# Mixpanel MCP Prompt Library — Agent Reference

> Optimized for LLMs and AI agents. Use as a context file (Claude project, Cursor, etc.) or reference loaded by an MCP skill.

## How to use this library

Match user intent to a prompt below. Replace `[brackets]` with specifics. Resolve `~~category` placeholders to whatever tool is connected (see Connectors). When a request spans multiple prompts, compose them into a single workflow — do not ask the user to run them separately.

## Rules

1. **Start with schema discovery.** Call Get-Events and Get-Event-Details to confirm event names before any analysis. Do not guess.
2. **Every analysis prompt needs four things:** behavior (events), population (who), timeframe (when), answer shape (rate, trend, breakdown).
3. **Verify properties before breakdowns.** Call Get-Property-Names and Get-Property-Values first. Wrong names fail silently.
4. **Flows for discovery, Funnels for measurement.** Use `chartType=sankey` for Flows, especially with `stepsBefore`.
5. **Prefer property filters over cohorts.** Cohorts are supported, but `plan_type = enterprise` is more transparent than a named cohort.
6. **Check data quality before critical analysis.** Call Get-Issues on unvalidated events. Broken events produce confident wrong answers.
7. **Cross-tool prompts require connected MCPs.** Check what's available before suggesting. Only offer chains where a connector is active.
8. **Writes require Admin or Project Owner.** If a write returns an error or empty response, inform the user about the permission requirement. Do not retry.
9. **Rate limit: 600 requests/hour.** Split work that needs 20 or more queries across multiple sessions.
10. **Always use explicit dates.** Anchor all timeframes to today's date in ISO format. Do not rely on "last 30 days" without computing the actual range.
11. **Set the project first.** If Get-Projects returns multiple and the user hasn't specified one, ask before running queries. Do not default.
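
Rule 10 is mechanical: convert a relative window into explicit ISO dates before querying. A minimal sketch of that conversion (the helper name is illustrative, not part of the MCP tool set):

```python
from datetime import date, timedelta

def explicit_range(days_back: int, today: date) -> tuple[str, str]:
    """Turn a relative window like 'last 30 days' into explicit ISO dates,
    anchored to a known 'today' rather than left implicit."""
    start = today - timedelta(days=days_back)
    return start.isoformat(), today.isoformat()

# "last 30 days" anchored to 2024-06-15
from_date, to_date = explicit_range(30, date(2024, 6, 15))
# from_date == "2024-05-16", to_date == "2024-06-15"
```

Pass the computed `from_date`/`to_date` into the query rather than the phrase "last 30 days".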

## Anti-patterns

Never do these:
- **Don't run a funnel without confirming event names.** The query will succeed with zero results and look like a real finding.
- **Don't assume a property exists because it sounds right.** Always verify with Get-Property-Names.
- **Don't filter by cohort name as a string.** Cohort IDs are required for cohort-based filters.
- **Don't interpret zero results as "this doesn't happen."** Check the event name, property values, and date range first.
- **Don't present a single data point as a trend.** If you only have one period, say so — don't frame it as directional.

## Confidence signals

Flag uncertainty in these situations:
- Query returns zero results → verify event name and filters before interpreting.
- Retention or conversion looks unusually high or low → call out sample size.
- AI-generated severity or prioritization → note it's inference, not a score from the API.
- Flows return fewer paths than requested → note the data may be sparse for this timeframe.
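
Several prompts below ask the agent to flag spikes or drops against a rolling baseline (e.g. "greater than 20% from the 7-day average"). That flagging is arithmetic the agent applies to query results, not a Mixpanel API feature; a minimal sketch, assuming daily counts come back as a flat list:

```python
def flag_anomalies(daily_counts: list[float], threshold: float = 0.20) -> list[int]:
    """Return indices of days deviating more than `threshold` from the
    trailing 7-day average. AI-side reasoning, not an API feature."""
    flagged = []
    for i in range(7, len(daily_counts)):
        baseline = sum(daily_counts[i - 7:i]) / 7
        if baseline and abs(daily_counts[i] - baseline) / baseline > threshold:
            flagged.append(i)
    return flagged

counts = [100, 102, 98, 101, 99, 103, 100, 97, 160]  # final day spikes
# flag_anomalies(counts) → [8]
```

When a day is flagged, report the magnitude alongside it; when nothing is flagged, say so rather than inventing a trend.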

---

## Connectors

Cross-tool prompts use `~~category` as a placeholder. Resolve to the connected tool at runtime. Before suggesting a cross-tool workflow, check which MCP servers are available. If detection isn't possible, ask once at session start: "Which other tools are connected alongside Mixpanel?"

| Placeholder | Chain with Mixpanel | Examples |
|---|---|---|
| `~~chat` | Search feedback, team threads about features/bugs | Slack, Teams |
| `~~calendar` | Meeting prep — pull usage for upcoming accounts | Google Calendar, Outlook |
| `~~project tracker` | Match drop-offs to bugs/tasks, file issues | Jira, Linear, Asana |
| `~~error monitoring` | Correlate error events with exceptions | Sentry, Datadog, Bugsnag |
| `~~knowledge base` | Export data dictionaries, write findings | Notion, Confluence |
| `~~CRM` | Enrich accounts with deal stage, ARR, owner | Salesforce, HubSpot |
| `~~product feedback` | Match usage with feature requests, complaints | Productboard, Canny, Zendesk |
| `~~dev environment` | Bring usage data into the building workflow | Cursor, Lovable, Replit, Windsurf |
| `~~enterprise search` | Find related internal docs, past analyses | Glean, Kendra |
| `~~email` | Draft account updates, share summaries | Gmail, Outlook |
| `~~data warehouse` | Join behavioral data with revenue/billing | BigQuery, Snowflake, Databricks |
| `~~ai observability` | Correlate LLM performance with user behavior | Langfuse, Helicone, Braintrust |

---

## Prompts

Fields: `section`, `category`, `title`, `intent` (what the user is trying to do), `prompt`, `tools`, `response_format` (how to present results), and optional `caveat`, `follow_up`, `chain_with`, `connectors`, `tags`.

```yaml
# ============================================================
# ORIENT — Schema discovery. Run these first.
# ============================================================

- section: orient
  title: Find your projects
  intent: User needs to identify which Mixpanel project to work with
  prompt: "What Mixpanel projects do I have access to? List them with their IDs."
  tools: [Get-Projects]
  response_format: Table of project name and ID
  tags: [setup, project-id]

- section: orient
  title: Map schema to a business question
  intent: User wants to know which events represent a behavior they care about
  prompt: "I want to understand [onboarding completion / feature adoption / checkout conversion] in this project. Look at the available events and properties and tell me which ones best represent that behavior. Suggest the top 3–5 candidates and explain your reasoning."
  tools: [Get-Events, Get-Event-Details]
  response_format: Ranked list of candidate events with rationale
  caveat: Quality depends on Lexicon descriptions.
  follow_up: "Give me the full details on [Event Name], including any data quality issues."
  tags: [schema, events, discovery]

- section: orient
  title: Understand what a specific event means
  intent: User wants full context on a single event
  prompt: "Give me the full details on the [Event Name] event: its description, what properties are attached, when it fires, and any data quality issues. Tell me if anything looks off."
  tools: [Get-Event-Details, Get-Issues, Get-Property-Names]
  response_format: Event summary with properties list and any quality flags
  tags: [schema, event-details, data-quality]

- section: orient
  title: Find your activation signal
  intent: User wants to identify the best activation event for their product
  prompt: "Based on the events in this project, what's the best candidate for an activation event — something that predicts whether a new user will stick around? Show me the most relevant events and explain why each one does or doesn't fit."
  tools: [Get-Events, Get-Event-Details]
  response_format: Recommended activation event with reasoning
  caveat: Reasoning task only — cannot validate statistically. Follow up with retention query.
  chain_with: [N-day retention from a starting event]
  tags: [activation, discovery, reasoning]

- section: orient
  title: Explore properties and their actual values
  intent: User needs to know what filters and breakdowns are available
  prompt: "What properties are available on the [Event Name] event? For the ones useful as filters or breakdowns (plan type, device, user role), show me the actual values each one contains."
  tools: [Get-Property-Names, Get-Property-Values, Get-Property]
  response_format: Property list with sample values for each
  caveat: Run this before any breakdown prompt.
  tags: [schema, properties, filters, breakdowns]

- section: orient
  title: Get a direct link to Lexicon
  intent: User wants to edit an event in the Mixpanel UI
  prompt: "Give me the Lexicon URL for the [Event Name] event so I can go edit it in the Mixpanel UI."
  tools: [Get-Lexicon-URL]
  response_format: Direct URL
  tags: [lexicon, link]

# ============================================================
# ANALYZE — Funnels, retention, flows, adoption, trends.
# ============================================================

# -- Funnels --

- section: analyze
  category: funnels
  title: Conversion funnel with a specific population
  intent: User wants to measure conversion through a series of steps
  prompt: "What's the conversion rate from [Step 1] to [Step 2] to [Step 3] for [first-time users / paid accounts / mobile users] over the last 30 days? Show me where the biggest drop-off is."
  tools: [Get-Query-Schema, Run-Query]
  response_format: Step-by-step conversion with drop-off percentages. Lead with the biggest drop.
  caveat: Run Orient prompts first if unsure which property to filter on.
  follow_up: "Break that down by [plan type / device / channel]."
  chain_with: [Discover what happens between two funnel steps, Replay after identifying users via analysis]
  tags: [funnels, conversion, population, breakdown]

- section: analyze
  category: funnels
  title: Week-over-week funnel trend
  intent: User wants to see if conversion is improving or declining
  prompt: "Show me how our conversion rate from [Event A] to [Event B] has changed week over week for the past 8 weeks. Flag any weeks where conversion dropped more than 10%."
  tools: [Get-Query-Schema, Run-Query]
  response_format: Weekly conversion rates with flagged drops
  caveat: Flagging is AI reasoning, not a tool feature.
  tags: [funnels, trends, week-over-week]

- section: analyze
  category: funnels
  title: Funnel breakdown by segment
  intent: User wants to compare conversion across segments
  prompt: "Break down our [Event A] to [Event B] conversion by [plan type / acquisition channel / device type] for the last 60 days. Which segment converts best, which converts worst, and what's the gap?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Segments ranked by conversion rate with gap analysis
  caveat: Verify the property name exists before running.
  tags: [funnels, breakdown, segments]

# -- Retention --

- section: analyze
  category: retention
  title: N-day retention from a starting event
  intent: User wants to measure how many users come back after a key action
  prompt: "What's the 1, 7, 14, and 30-day retention for users who completed [Starting Event] in the last 90 days? Use [Return Event] as the retention signal."
  tools: [Get-Query-Schema, Run-Query]
  response_format: Retention curve — D1, D7, D14, D30 percentages
  follow_up: "Compare that to users who completed [Alternative Starting Event]."
  tags: [retention, n-day]

- section: analyze
  category: retention
  title: Retention comparison across two time periods
  intent: User wants to know if retention is getting better or worse
  prompt: "Compare 30-day retention for users who signed up in [Month 1] versus [Month 2]. Did retention improve? Which part of the curve changed: early drop-off or long-term engagement?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Side-by-side retention curves with delta analysis
  caveat: Runs two queries and compares. Needs sufficient users in both periods.
  tags: [retention, comparison, time-periods]

# -- Feature Adoption --

- section: analyze
  category: feature-adoption
  title: Adoption rate for a specific feature
  intent: User wants to know what percentage of their users have tried a feature
  prompt: "What percentage of [active users / paid accounts / users who completed onboarding] have used [Feature Event] at least once in the last 30 days? How does that compare to the 30 days before?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Adoption percentage, current vs. prior period with delta
  tags: [adoption, feature, period-comparison]

- section: analyze
  category: feature-adoption
  title: Feature usage depth
  intent: User wants to know how heavily a feature is used, not just whether it's tried
  prompt: "For users who have used [Feature Event] at least once, how many times do they use it on average per week? Break it down by [plan type / user role] if those properties exist."
  tools: [Get-Property-Names, Get-Query-Schema, Run-Query]
  response_format: Average frequency per segment
  caveat: Verify breakdown property exists first.
  tags: [adoption, frequency, breakdown]

# -- Trends --

- section: analyze
  category: trends
  title: Event volume trend
  intent: User wants to spot anomalies in event volume
  prompt: "Show me the daily volume of [Event Name] over the last 30 days. Flag any days with a spike or drop greater than 20% from the 7-day average."
  tools: [Get-Query-Schema, Run-Query]
  response_format: Daily counts with flagged anomalies and magnitude
  caveat: Spike detection is AI reasoning. For persistent monitoring, recreate in Mixpanel with an alert.
  tags: [trends, volume, anomaly-detection]

- section: analyze
  category: trends
  title: Pull a saved report
  intent: User wants to retrieve an existing report by name
  prompt: "Retrieve the report named [Report Name] and show me the current results."
  tools: [Get-Report]
  response_format: Report results summarized with headline finding
  caveat: Use Search-Entities to find reports by name if needed.
  tags: [reports, saved, retrieval]

- section: analyze
  category: trends
  title: Comparative period analysis
  intent: User wants a high-level health check across multiple metrics
  prompt: "Compare this month to last month across these three metrics: [DAU], [Signup-to-Purchase conversion], and [7-day retention]. Summarize what improved, what got worse, and what stayed flat."
  tools: [Run-Query]
  response_format: Three metrics, each with current/prior/delta. Lead with what changed most.
  caveat: 6-query session. Keep each query simple — add breakdowns as a follow-up.
  follow_up: "The conversion rate dropped. Break that funnel down by [channel / device] to see which segment drove the change."
  tags: [trends, comparison, multi-metric]

# -- Flows --

- section: analyze
  category: flows
  title: What do users do after a key event?
  intent: User wants to discover what happens next after a specific action
  prompt: "Show me the 3 most common steps users take after [Event Name] in the last 30 days. Where do they go, and how many drop off at each step?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Top paths with step counts and drop-off at each node
  caveat: Use stepsAfter=3, stepsBefore=0, chartType=sankey.
  follow_up: "Now show me the same thing but only for users where [plan_type = free]."
  tags: [flows, paths, post-event, sankey]

- section: analyze
  category: flows
  title: What leads to a conversion event?
  intent: User wants to understand the paths that precede a key conversion
  prompt: "What are the 3 most common paths users take in the steps before [Purchase Completed / Subscription Started / Activation Event] in the last 30 days? Which paths are most common?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Pre-conversion paths ranked by volume
  caveat: Use stepsBefore=3, stepsAfter=0, chartType=sankey. Paths chart type may return empty with stepsBefore.
  tags: [flows, paths, pre-event, conversion, sankey]

- section: analyze
  category: flows
  title: Compare paths between two segments
  intent: User wants to see where different user groups diverge in behavior
  prompt: "Compare the top user paths after [Onboarding Completed] for [free users] versus [paid users] in the last 30 days. Where do their journeys diverge?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Side-by-side dominant paths with divergence points highlighted
  caveat: Runs two queries with different filters and compares.
  tags: [flows, segments, comparison]

- section: analyze
  category: flows
  title: Discover what happens between two funnel steps
  intent: User has a funnel drop-off and wants to understand what users do instead
  prompt: "I have a funnel from [Step A] to [Step B] where conversion is low. Show me the most common paths users take between those two events. What are they doing instead of converting?"
  tools: [Get-Query-Schema, Run-Query]
  response_format: Intermediate paths with counts, highlighting which lead to conversion vs. drop-off
  tags: [flows, funnels, diagnostic, drop-off]

# -- Feature Launch --

- section: analyze
  category: feature-launch
  title: End-of-sprint feature assessment
  intent: User shipped a feature and wants a full evaluation
  prompt: "We shipped [Feature Name] two weeks ago. Run these analyses in order: 1) weekly adoption trend (unique users of [Feature Event] per week for the last 4 weeks), 2) a funnel from [first use] to [repeat use], 3) retention for users who adopted vs. users who didn't, and 4) the top paths users take after [Feature Event]. Summarize whether the launch is on track."
  tools: [Run-Query]
  response_format: Four-part summary — adoption trend, repeat conversion, retention delta, path analysis. End with go/no-go assessment.
  caveat: 4–7 tool call session.
  chain_with: [Replay for a specific user, Create a dashboard from scratch]
  tags: [feature-launch, adoption, funnels, retention, flows, multi-step]

# ============================================================
# INVESTIGATE — User-level and session-level analysis.
# ============================================================

- section: investigate
  title: Replay for a specific user
  intent: User has a user ID and wants to see what they did
  prompt: "Pull session replays for user [distinct_id] from the last 14 days. I want to see what they were doing around the time they [dropped off / stopped using the feature / triggered an error]."
  tools: [Get-User-Replays-Data]
  response_format: Replay links with session timestamps and summary of key actions
  caveat: Works best with a user ID from prior analysis or a support ticket.
  tags: [replay, user, session]

- section: investigate
  title: Replay after identifying users via analysis
  intent: User wants to find drop-off users and watch their sessions
  prompt: "Step 1: Show me the user IDs of users who reached [Step N] but did not complete [Final Event] in the last 7 days. Return up to 10 distinct_ids. Step 2: Now pull session replays for those users."
  tools: [Get-Query-Schema, Run-Query, Get-User-Replays-Data]
  response_format: User IDs with replay links. Note which step each user reached.
  caveat: MCP returns replay metadata and links. Recordings open in Mixpanel UI.
  tags: [replay, funnels, drop-off, two-step]

- section: investigate
  title: Diagnose a drop-off user
  intent: User wants to understand why a specific person stopped using the product
  prompt: "User [distinct_id] was active last month but hasn't logged in for 2 weeks. Pull their last 30 days of activity and any session replays from their final sessions. What were they doing before they stopped?"
  tools: [Run-Query, Get-User-Replays-Data]
  response_format: Activity timeline showing usage arc, with replay links for final sessions
  tags: [replay, churn, user, drop-off, activity]

- section: investigate
  title: Review a user's recent sessions
  intent: User wants to see whether engagement is getting deeper or shallower
  prompt: "Pull the last 5 sessions for user [distinct_id]. For each session, show me what features they used, how long the session lasted, and where they exited. Are their sessions getting shorter or less engaged over time?"
  tools: [Get-User-Replays-Data, Run-Query]
  response_format: Session-by-session breakdown with trajectory assessment
  follow_up: "Show me the replay for their shortest session."
  tags: [replay, sessions, engagement, trajectory]

- section: investigate
  title: Account-level investigation
  intent: User wants a usage overview for a specific company or account
  prompt: "Show me all activity for users at [Account Name / company_id] over the last 30 days. Which features are they using most? Where are they dropping off?"
  tools: [Get-Events, Run-Query]
  response_format: Feature usage ranked by volume, with drop-off points flagged
  caveat: Requires company properties on events (e.g., company_id).
  follow_up: "Pull session replays for their most active user."
  tags: [account, investigation, features, engagement]

# ============================================================
# BUILD — Dashboards and project artifact management.
# ============================================================

- section: build
  category: dashboards
  title: Create a dashboard from scratch
  intent: User wants a persistent dashboard with specific reports
  prompt: "Create a Mixpanel dashboard called [Dashboard Name] with these reports: 1) daily active users over the last 30 days, 2) signup-to-purchase conversion funnel, 3) 7-day retention for new users. Add a text card at the top summarizing what this board tracks."
  tools: [Create-Dashboard]
  response_format: Confirm dashboard created with link
  caveat: Dashboards persist in Mixpanel after the conversation ends.
  follow_up: "Add a fourth report showing feature adoption by plan type."
  tags: [dashboard, create, reports, persist]

- section: build
  category: dashboards
  title: Build a weekly growth dashboard
  intent: User wants an ongoing growth tracking board
  prompt: "Build a dashboard that tracks signups, activations, and churn week over week for the last 12 weeks. Include a text card at the top explaining the metric definitions."
  tools: [Create-Dashboard]
  response_format: Confirm dashboard created with link
  caveat: Discover event names via Orient prompts first if unsure.
  tags: [dashboard, growth, weekly]

- section: build
  category: dashboards
  title: Duplicate and customize an existing dashboard
  intent: User wants a copy of a dashboard with modifications
  prompt: "Duplicate the [Dashboard Name] dashboard and change the date range to last quarter. Rename it [New Dashboard Name]."
  tools: [Get-Dashboard, Duplicate-Dashboard]
  response_format: Confirm new dashboard with link
  tags: [dashboard, duplicate, customize]

- section: build
  category: dashboards
  title: Update a dashboard's layout
  intent: User wants to rearrange or add to an existing dashboard
  prompt: "In the [Dashboard Name] dashboard, move the retention report to the top row and add a new text card below it explaining how we define retention."
  tools: [Get-Dashboard, Update-Dashboard]
  response_format: Confirm layout updated
  caveat: Call Get-Dashboard with include_layout=True first to get cell/row IDs.
  tags: [dashboard, layout, update]

- section: build
  category: audit
  title: List and audit existing dashboards
  intent: User wants to find and clean up stale dashboards
  prompt: "What dashboards exist in this project? Which ones haven't been updated in 90 days?"
  tools: [Search-Entities]
  response_format: Dashboard list with last-edited dates, stale ones flagged
  caveat: Search-Entities also finds experiments, flags, metric trees, playlists, heat maps, cohorts — but can only drill into dashboards and reports.
  follow_up: "Delete the ones that are stale. Actually, show me their contents first."
  tags: [audit, dashboards, stale]

- section: build
  category: audit
  title: Full project artifact inventory
  intent: User wants to understand everything that exists in the project
  prompt: "Give me an inventory of everything in this project: dashboards, reports, cohorts, experiments, feature flags, and metric trees. For each type, tell me the count and when the most recent one was last edited."
  tools: [Search-Entities]
  response_format: Entity types with counts and last-edited dates
  caveat: Metadata only for experiments, flags, and metric trees — can't inspect configuration.
  tags: [audit, inventory, artifacts, hygiene]

- section: build
  category: audit
  title: Find stale experiments and feature flags
  intent: User wants to identify experiments and flags that need cleanup
  prompt: "List all experiments and feature flags in this project. Which ones haven't been edited in the last 60 days? Who created them?"
  tools: [Search-Entities]
  response_format: List with creator, last-edited date, stale flag
  caveat: No access to variants, targeting rules, or results.
  tags: [audit, experiments, feature-flags, stale]

# ============================================================
# GOVERN — Lexicon management and data quality.
# ============================================================

- section: govern
  category: lexicon
  title: Add descriptions to undocumented events
  intent: User wants to fill in missing Lexicon descriptions at scale
  prompt: "Find all events that don't have a description in Lexicon. For each one, suggest a description based on the event name and its properties. Then apply the descriptions."
  tools: [Get-Events, Get-Event-Details, Edit-Event]
  response_format: List of events with suggested descriptions, confirm before applying
  caveat: Review before confirming — AI can guess wrong with internal shorthand.
  tags: [lexicon, descriptions, governance, write]

- section: govern
  category: lexicon
  title: Tag related events
  intent: User wants to organize events by category
  prompt: "Tag all events related to [checkout / onboarding / search] with the tag \"[Tag Name]\". Create the tag if it doesn't exist yet."
  tools: [Get-Events, Create-Tag, Edit-Event]
  response_format: Count of events tagged, list of names
  follow_up: "Now do the same for all properties on those events."
  tags: [lexicon, tags, governance, write]

- section: govern
  category: lexicon
  title: Hide inactive events
  intent: User wants to clean up unused events from the UI
  prompt: "Find all events that haven't fired in the last 90 days and hide them in Lexicon."
  tools: [Get-Events, Edit-Event]
  response_format: Count hidden, list of event names
  caveat: Hidden events are still queryable by name.
  tags: [lexicon, hide, cleanup, write]

- section: govern
  category: lexicon
  title: Flag PII properties
  intent: User wants to audit properties for personally identifiable information
  prompt: "Look at all properties across all events. Flag any that look like they might contain personally identifiable information (email, phone, name, IP address) but aren't marked as sensitive yet."
  tools: [Get-Events, Get-Property-Names, Get-Property-Values, Edit-Property]
  response_format: List of flagged properties with reasoning. Treat as a starting audit, not a final pass.
  tags: [lexicon, pii, sensitive, governance, write]

- section: govern
  category: lexicon
  title: Rename a tag across the project
  intent: User wants to rename a tag globally
  prompt: "Rename the tag \"[Old Tag]\" to \"[New Tag]\" across all events and properties that use it."
  tools: [Rename-Tag]
  response_format: Confirm rename with count of affected entities
  tags: [lexicon, tags, rename, write]

- section: govern
  category: data-quality
  title: Audit data quality for a specific event
  intent: User wants to know if a specific event has problems
  prompt: "Are there any open data quality issues for the [Event Name] event? Summarize what's broken, when each issue was first detected, and how long it's been flagged. Tell me which ones to fix first."
  tools: [Get-Issues]
  response_format: Issues ranked by age with recommended fix order
  caveat: Prioritization is inference, not an API severity score.
  tags: [data-quality, audit, event-specific]

- section: govern
  category: data-quality
  title: Full project health check
  intent: User wants a broad view of data quality across the project
  prompt: "Run a data quality audit on this project. What are the most critical open issues across all events and properties? Group them by severity and tell me which ones are most likely affecting analysis right now."
  tools: [Get-Issues]
  response_format: Issues grouped by inferred severity, most impactful first
  tags: [data-quality, audit, project-wide]

- section: govern
  category: data-quality
  title: Dismiss resolved issues in bulk
  intent: User wants to clean up resolved data quality issues
  prompt: "Dismiss all data quality issues for events we deprecated last quarter. Specifically, dismiss issues for [Event A], [Event B], and [Event C]."
  tools: [Dismiss-Issues]
  response_format: Confirm dismissed with count
  tags: [data-quality, dismiss, cleanup, write]

- section: govern
  category: data-quality
  title: Check data quality before critical analysis
  intent: User is about to run an important analysis and wants to verify the data first
  prompt: "Before I run a funnel analysis on [Event A] through [Event C], check for any data quality issues on those three events. Tell me if anything would make the results unreliable."
  tools: [Get-Issues]
  response_format: Clean/not-clean verdict per event, with details on any issues found
  chain_with: [Conversion funnel with a specific population]
  tags: [data-quality, pre-check, funnels]

- section: govern
  category: data-quality
  title: Escalate stale data quality issues
  intent: User wants a formatted list of old issues to send to their data team
  prompt: "Find all data quality issues that have been open for more than 14 days. For each one, give me the event name, issue description, and how long it's been flagged. Format it so I can paste it into a message to the data engineering team."
  tools: [Get-Issues]
  response_format: Copy-pasteable list formatted for chat or issue tracker
  tags: [data-quality, escalation, stale, formatting]

- section: govern
  category: data-quality
  title: Data quality trend check
  intent: User wants to know if data quality is improving or degrading
  prompt: "How many open data quality issues does this project have right now? Compare that to the issues created in the last 30 days versus the last 60 days. Are we accumulating issues faster than we're resolving them?"
  tools: [Get-Issues]
  response_format: Issue counts by window with directional trend
  caveat: Directional only — dismissed issues drop from the count.
  tags: [data-quality, trend, accumulation]

# ============================================================
# CHAIN — Cross-tool and file-based workflows.
# ============================================================

# -- Attached Files --

- section: chain
  category: attached-files
  title: Correlate a metric change with external events
  intent: User sees a metric shift and wants to understand what caused it
  prompt: "Our [metric] changed between [date range]. I'm attaching our [campaign calendar / release notes / experiment log]. Does the timing correlate with anything in this document? What's the most likely explanation?"
  tools: []
  response_format: Timeline alignment showing which external events coincide with the metric change
  caveat: No MCP tools — AI reasons across query results and attached file.
  tags: [synthesis, file-attachment, correlation]

- section: chain
  category: attached-files
  title: QBR prep with template
  intent: User is preparing a quarterly business review for an account
  prompt: "I'm preparing a QBR for [Account Name]. I've attached our slide template. Pull their usage data for the last quarter, then fill in the template with: feature adoption rates, engagement trends, and any areas of concern."
  tools: [Run-Query]
  response_format: Filled template sections with data
  caveat: Requires file attachment support.
  tags: [qbr, template, account, file-attachment]

- section: chain
  category: attached-files
  title: Strategic synthesis
  intent: User wants to combine analytics findings with external context for recommendations
  prompt: "Based on the analysis results above, plus the [benchmark doc / customer interview notes / competitive intel] attached, what are the two or three highest-priority changes for [the onboarding flow / this feature / this segment's experience] going into next quarter? Be direct."
  tools: []
  response_format: Prioritized recommendations with supporting evidence from both data and context
  caveat: No MCP tools — synthesis only. Best after running analysis prompts.
  tags: [synthesis, strategy, file-attachment]

# -- Cross-Tool --

- section: chain
  category: cross-tool
  title: Correlate usage data with customer feedback
  intent: User wants to combine quantitative usage with qualitative sentiment
  prompt: "Pull adoption numbers for [Feature X] this week from Mixpanel, then search ~~chat for any mentions of [Feature X] in customer-facing channels. Summarize the quantitative data and qualitative feedback together."
  tools: [Run-Query]
  connectors: [chat]
  response_format: Usage metrics + feedback themes, with assessment of whether sentiment matches the numbers
  caveat: Search is keyword-based — add alternate names if customers call the feature something different.
  tags: [cross-tool, chat, feedback, adoption]

- section: chain
  category: cross-tool
  title: Meeting prep with usage data
  intent: User has upcoming meetings and wants usage context for each account
  prompt: "Check ~~calendar for external meetings this week. For each company, pull their usage data from Mixpanel for the last 30 days. Summarize each account in 3 bullets with anything that changed week-over-week."
  tools: [Run-Query]
  connectors: [calendar]
  response_format: Per-account bullets with usage trends
  caveat: Project needs a company/account property on events.
  tags: [cross-tool, calendar, meetings, account, prep]

- section: chain
  category: cross-tool
  title: Match drop-offs to open bugs or tasks
  intent: User wants to check if known issues explain funnel drop-offs
  prompt: "Show me the top 3 drop-off points in our [Core Funnel] from Mixpanel. Then check ~~project tracker for any open issues or tasks related to those drop-off points."
  tools: [Run-Query]
  connectors: [project_tracker]
  response_format: Drop-off points paired with matching issues (or noted as untracked)
  caveat: Results depend on how the team titles issues; finding no matches is itself useful information.
  tags: [cross-tool, project-tracker, bugs, funnels]

- section: chain
  category: cross-tool
  title: Correlate errors with user impact
  intent: User wants to connect backend errors with frontend behavioral impact
  prompt: "Pull the users who triggered [Error Event] more than 3 times this week in Mixpanel. Then check ~~error monitoring for exceptions from those same users or matching error messages. Are these the same root cause or different bugs?"
  tools: [Run-Query]
  connectors: [error_monitoring]
  response_format: User overlap analysis with root cause assessment
  caveat: If user identifiers differ across tools, specify how to match them (email, distinct_id).
  tags: [cross-tool, error-monitoring, debugging]

- section: chain
  category: cross-tool
  title: Export schema to team wiki
  intent: User wants their Mixpanel schema documented in an accessible format
  prompt: "Pull the schema for this project — all events, their descriptions, and their properties. Then create a page in ~~knowledge base with a formatted data dictionary."
  tools: [Get-Events, Get-Event-Details]
  connectors: [knowledge_base]
  response_format: Formatted data dictionary page
  tags: [cross-tool, knowledge-base, data-dictionary, schema]

- section: chain
  category: cross-tool
  title: Enrich account investigation with CRM data
  intent: User wants usage data combined with deal context
  prompt: "Pull usage data for [Account Name] from Mixpanel for the last 30 days. Then check ~~CRM for their deal stage, ARR, renewal date, and account owner. Combine into a single account summary."
  tools: [Run-Query]
  connectors: [crm]
  response_format: Unified account brief — usage metrics + deal context
  caveat: Requires company properties on Mixpanel events and matching account in CRM.
  tags: [cross-tool, crm, account, enrichment]

- section: chain
  category: cross-tool
  title: Match usage patterns with feature requests
  intent: User wants to connect behavioral data with what customers are asking for
  prompt: "Pull the top features by usage volume this month from Mixpanel. Then search ~~product feedback for feature requests or complaints related to those features. Where does usage data disagree with customer sentiment?"
  tools: [Run-Query]
  connectors: [product_feedback]
  response_format: Features ranked by usage with matched feedback. Highlight mismatches.
  tags: [cross-tool, product-feedback, feature-requests, sentiment]

- section: chain
  category: cross-tool
  title: Bring usage data into a dev workflow
  intent: Developer wants analytics context while building
  prompt: "Pull the adoption and drop-off data for [Feature Area] from Mixpanel. Summarize which parts users engage with most and where they struggle. I'm building in ~~dev environment and need this context to prioritize what to fix."
  tools: [Run-Query]
  connectors: [dev_environment]
  response_format: Concise usage summary optimized for dev context — what works, what doesn't, where to focus
  tags: [cross-tool, dev-environment, building, prioritization]

- section: chain
  category: cross-tool
  title: Find related internal context
  intent: User wants to find prior analyses, docs, or discussions related to their Mixpanel findings
  prompt: "I just found that [metric or finding] in Mixpanel. Search ~~enterprise search for any related internal documents, past analyses, or strategy memos that might provide context."
  tools: []
  connectors: [enterprise_search]
  response_format: List of related documents with relevance summary
  tags: [cross-tool, enterprise-search, context, discovery]

- section: chain
  category: cross-tool
  title: Draft and share analysis summary
  intent: User wants to send findings to stakeholders
  prompt: "Summarize the analysis we just completed in 5 bullet points. Draft an ~~email to [recipient] with the subject '[Topic] — Usage Update' sharing the key findings and recommended next steps."
  tools: []
  connectors: [email]
  response_format: Email draft ready to send
  tags: [cross-tool, email, sharing, summary]

- section: chain
  category: cross-tool
  title: Join behavioral data with warehouse metrics
  intent: User wants to combine Mixpanel behavioral data with revenue or backend data
  prompt: "Pull feature adoption data for [Feature X] from Mixpanel. Then query ~~data warehouse for revenue per user or account over the same period. Do users who adopt [Feature X] generate more revenue?"
  tools: [Run-Query]
  connectors: [data_warehouse]
  response_format: Adoption vs. revenue correlation with sample sizes
  caveat: Requires matching user/account identifiers across Mixpanel and the warehouse.
  tags: [cross-tool, data-warehouse, revenue, correlation]

- section: chain
  category: cross-tool
  title: Correlate LLM performance with user behavior
  intent: User building an AI product wants to connect model performance with UX outcomes
  prompt: "Pull user engagement metrics for [AI Feature] from Mixpanel — usage frequency, session depth, and any error events. Then check ~~ai observability for latency, error rates, and model performance over the same period. Are model issues correlated with user drop-off?"
  tools: [Run-Query]
  connectors: [ai_observability]
  response_format: Side-by-side performance and behavioral metrics with correlation assessment
  tags: [cross-tool, ai-observability, llm, performance, correlation]
```
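
Several chains above (error monitoring, CRM, data warehouse) hinge on the same step: joining records from Mixpanel with records from another tool on a shared identifier. A minimal sketch of that join, assuming illustrative field names — `email`, `distinct_id`, `feature_x_events`, and `revenue` are hypothetical, not from any real Mixpanel or warehouse schema:

```python
# Hypothetical sketch of the cross-tool join step. Rows without a match
# are kept (with warehouse=None) so gaps stay visible in the summary,
# per the caveat that identifiers may not line up across tools.

def join_on_identifier(mixpanel_rows, warehouse_rows, key="email"):
    """Index warehouse rows by `key`, then attach the matching row
    (or None) to each Mixpanel row."""
    index = {row[key]: row for row in warehouse_rows if row.get(key)}
    return [
        {**row, "warehouse": index.get(row.get(key))}
        for row in mixpanel_rows
    ]

# Illustrative data shaped like flattened query results.
usage = [
    {"distinct_id": "u1", "email": "a@x.com", "feature_x_events": 12},
    {"distinct_id": "u2", "email": "b@x.com", "feature_x_events": 0},
    {"distinct_id": "u3", "email": None, "feature_x_events": 4},
]
revenue = [
    {"email": "a@x.com", "revenue": 900},
    {"email": "c@x.com", "revenue": 120},
]

result = join_on_identifier(usage, revenue)
matched = [r for r in result if r["warehouse"]]
```

If emails are missing or inconsistent, fall back to another shared key (e.g. `distinct_id`, if the warehouse stores it) rather than dropping unmatched users silently.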
