Purpose
Help users get consistent, debuggable results when querying Pigment through an MCP connector (currently ChatGPT and Claude).
Quick summary (what matters most)
- Be explicit: name the workspace, application, and metric.
- The LLM is trying to match your words to object names, so clear naming wins.
- The LLM can only query metrics that are enabled for AI.
- Verify that the metrics you need to query are enabled. Avoid enabling multiple metrics with similar names.
- Shared metrics across applications are a common failure mode today: if a metric is actually shared from another app, you may need to specify the source app.
- Ask for transparency: have the LLM state which app, metric, filters, and snapshot it used.

How it works (current constraints)
- Access is permissioned: MCP respects Pigment access rights. Users can only query applications and metrics they already have permission to view.
- Requires Pigment AI enablement: MCP relies on Pigment AI to query data.
- Whitelisted AI systems: only approved systems can connect (currently ChatGPT and Claude).
- Shared metric support is not fully smooth yet: shared metrics can confuse both the tool and end users. Improved support is on the roadmap.

Before you start (checklist)
- Confirm the connector is active in your chat (select the Pigment connector before asking).
- Confirm you have access to the app/metrics you want (permissions still apply).
- Ask the assistant:
  - “List the applications I can access in this workspace.”
  - “List the metrics available in application <App Name>.”
  - “Which application is metric <Metric Name> from?” (especially important if metrics are shared)

Best practices
1) Start with discovery (especially for first-time setup)
If you are not sure what the connector can “see” yet, start by asking it to enumerate what is available.
- “List the applications I can access in this workspace.”
- “For application <App>, list the metrics available for analysis.”
This quickly surfaces the most common setup issue: the metric you want is not enabled for AI (and therefore cannot be queried via MCP).
2) Use a two-step workflow: retrieve first, analyze second
A reliable pattern (and the one we demoed):
- Pull a defined dataset from Pigment (specific metric, dimensions, time range, filters).
- Ask the LLM to do something with the extracted dataset:
  - draft an executive narrative,
  - generate hypotheses and follow-up questions,
  - build a checklist of anomalies to investigate,
  - combine the dataset with information from another connected system (e.g., docs or goals) to assess performance.
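The two steps above can be sketched in code. Everything here is illustrative: `fetch_metric()` is a hypothetical stand-in for the MCP retrieval step (the real connector call and the sample rows are invented), and the analysis step is a simple actual-vs-budget variance.

```python
# Sketch of the two-step pattern: retrieve a defined dataset first,
# then analyze it separately. fetch_metric() is a placeholder for the
# MCP retrieval step; Pigment's actual connector interface may differ.

def fetch_metric(app, metric, time_range, dimensions, filters):
    """Placeholder for the MCP retrieval call; returns invented rows."""
    return [
        {"month": "2024-01", "region": "EMEA", "actual": 120, "budget": 100},
        {"month": "2024-01", "region": "AMER", "actual": 90,  "budget": 110},
        {"month": "2024-02", "region": "EMEA", "actual": 130, "budget": 105},
        {"month": "2024-02", "region": "AMER", "actual": 95,  "budget": 115},
    ]

def variance_by(rows, dim):
    """Step 2: analysis on the extracted dataset (actual minus budget per group)."""
    out = {}
    for r in rows:
        out[r[dim]] = out.get(r[dim], 0) + (r["actual"] - r["budget"])
    return out

rows = fetch_metric("Revenue App", "Revenue", "2024-01..2024-02",
                    ["month", "region"], {})
print(variance_by(rows, "region"))  # {'EMEA': 45, 'AMER': -40}
```

Keeping retrieval and analysis as separate steps makes each one easy to inspect: you can check the extracted rows before trusting any narrative built on top of them.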
3) Always specify “where to look”
Include:
- Workspace
- Application
- Metric name(s)
- Time period
- Dimensions and filters
If you don’t specify the application, the assistant may search across many apps and get confused.
4) Be explicit, and optimize naming to reduce ambiguity
Because matching is often name-based:
- Prefer unique metric names.
- Avoid many metrics with nearly identical names.
- Use the exact metric name as it appears in Pigment.
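To see why this matters, here is a toy illustration using Python's `difflib`. The connector's actual matching logic is not public; this is only an analogy for name-based matching, with invented metric names.

```python
# Illustration only: a simple name matcher (difflib here; the real
# matching logic is opaque) finds one clear winner in a de-duplicated
# list, but several plausible candidates when names overlap.
from difflib import get_close_matches

overlapping = ["Revenue", "Revenue (Net)", "Revenue Net", "Net Revenue"]
deduplicated = ["Net Revenue", "Gross Margin", "Headcount"]

print(get_close_matches("Net Revenue", overlapping, n=3, cutoff=0.6))
# ['Net Revenue', 'Revenue', 'Revenue Net']  -- three plausible hits
print(get_close_matches("Net Revenue", deduplicated, n=3, cutoff=0.6))
# ['Net Revenue']  -- one unambiguous hit
```

The first list returns three candidates above the similarity cutoff; the second returns exactly one. Unique names leave the matcher nothing to guess about.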
5) Control what is queryable by curating the “AI-enabled” metric list
In many workspaces, enabling fewer, cleaner, business-ready metrics improves result quality and reduces incorrect matches.
6) Watch for shared metrics (common root cause of “no answer”)
If the metric you want is shared from another app, the assistant may look in the wrong place.
Workaround pattern:
- “Which app is metric <Metric> from?”
- Re-run the query, explicitly referencing that app + metric.
7) Ask for transparency (debuggability)
Add a final instruction like:
- “In your final answer, include the application name, metric name(s), filters, dimensions, and the snapshot used for each number.”
8) Keep an experiment mindset (this is new)
Treat early runs as testing and refinement. If something looks off, ask the assistant what it queried, then tighten scope and rerun.

Example prompts
Use these as-is, then replace the placeholders.
- Sanity check (first run):
  - “In workspace <Workspace>, list the applications I can access. Then, for application <App>, list the metrics available for analysis.”
- Quick business question:
  - “In workspace <Workspace>, app <App>, use metric <Metric>. Show it by <Dimension> monthly for <Time range>. Filters: <Filters>. Include app, metric, filters, and snapshot in the output.”
- Variance / narrative:
  - “Pull Actuals vs Budget for <Metric> for <Time range> grouped by <Dimension>. Then write a 6-sentence exec summary and list the top 5 drivers.”
- Forecast outside Pigment:
  - “Retrieve monthly revenue for the last 24 months, then compute a statistical forecast that accounts for trend and seasonality, and show the next 12 months.”
- Combine with another source:
  - “Step 1: Read this doc (goals / OKRs).
    Step 2: Pull the relevant metric(s) from Pigment.
    Step 3: Tell me whether we are on track vs the targets, and why.”
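For the “forecast outside Pigment” prompt above, here is a rough sketch of what “trend and seasonality” can mean once the data is extracted: a hand-rolled linear-trend-plus-seasonal-average model on invented numbers. It is not what the assistant necessarily does; real data warrants a proper model (e.g., statsmodels' seasonal tools).

```python
# Sketch of a trend-plus-seasonality forecast on extracted monthly data.
# Illustrative only: the history below is synthetic.

def seasonal_trend_forecast(history, horizon=12, period=12):
    n = len(history)
    xs = range(n)
    # Least-squares linear trend: y = a + b*t
    mx = sum(xs) / n
    my = sum(history) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, history)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    # Average detrended deviation for each position in the seasonal cycle
    seasonal = [0.0] * period
    counts = [0] * period
    for t, y in enumerate(history):
        seasonal[t % period] += y - (a + b * t)
        counts[t % period] += 1
    seasonal = [s / c for s, c in zip(seasonal, counts)]
    # Project the trend forward and re-apply the seasonal component
    return [a + b * t + seasonal[t % period]
            for t in range(n, n + horizon)]

# 24 months of synthetic revenue: upward trend plus a December spike
history = [100 + 5 * t + (40 if t % 12 == 11 else 0) for t in range(24)]
forecast = seasonal_trend_forecast(history)
```

On this synthetic series the projected December (the last of the 12 forecast months) comes out highest, because the model carries both the upward trend and the learned December spike forward.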

Prompt templates
Template A — quick metric pull
In workspace <Workspace>, application <App>, use metric <Metric>. Show <Dimension> by <Time grain> for <Time range>. Apply filters: <Filters>.
In your final output, include the application name, metric name, filters, and snapshot used.
Template B — “what can I query?” discovery
In workspace <Workspace>, list the applications I can access. Then, for application <App>, list the available metrics that look relevant to <business question>.
Template C — shared metric troubleshooting
I’m trying to use metric <Metric>. Which application does it come from? If it’s shared, tell me the source app. Then run: <specific query>, explicitly using the correct source application.
Template D — retrieve then analyze
Step 1: Pull the dataset from workspace <Workspace>, app <App>, metric <Metric>, for <time range>, grouped by <dims> with filters <filters>.
Step 2: Based on the extracted dataset, write a 5-bullet executive summary and 3 follow-up questions.

Workspace hygiene recommendations
- Standardize and de-duplicate metric names.
- Document “source of truth” apps for key metrics.
- Reduce unnecessary sharing of similarly named blocks across apps.
- Periodically audit complex workspaces to improve discoverability.
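A hypothetical helper for the de-duplication audit: flag metric-name pairs that are similar enough to risk confusion. The names, the data, and the 0.6 threshold are all illustrative assumptions, not anything Pigment itself uses.

```python
# Flag near-duplicate metric names as candidates for renaming or
# consolidation. The 0.6 similarity threshold is a judgment call.
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(names, threshold=0.6):
    """Return all case-insensitive name pairs at or above the threshold."""
    return [(a, b) for a, b in combinations(names, 2)
            if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold]

metrics = ["Net Revenue", "Revenue Net", "Revenue (Net)", "Headcount"]
for a, b in near_duplicates(metrics):
    print(f"Review: {a!r} vs {b!r}")
```

Running this over an exported metric list (one name per line) gives a concrete review queue for the periodic audit.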

