
Auditing Your Pigment Workspace with the Modeler Agent


Who this is for: Pigment customers who want to use the Modeler Agent as a structured co-pilot to review an existing workspace — surfacing modeling issues, performance risks, access-rights gaps, and governance opportunities before making changes.

 

What you'll get: A repeatable, prompt-driven audit across four dimensions (Modeling & Architecture, Performance & Calc Health, Security & Access Rights, and Governance & Best Practices), with ready-to-use prompts and example outputs.

 

In a hurry? Skip to the Appendix at the bottom for the copy/paste audit kit, or use the Quick start (5–10 minutes).

 

Known Limitations

 

Read these before you start so you can scope the session correctly.

  • Scope is the open application. The agent can't audit across applications in one session; rerun per app.
  • Runtime performance is inferred, not measured. Validate hotspots with real board loads.
  • No external systems. The agent can't reach into your ERP, HRIS, or CRM. Summarize external logic in the prompt when needed.
  • Permissions bound capability. The agent operates as you — it can't see or change what you can't.
  • Data quality is a human call. The agent can propose reconciliation metrics and validation checks, but it can't judge whether source data is business-correct.

 


 

Before you start

 

  • Confirm the Modeler Agent is enabled. It must be turned on for your workspace (Enable Modeler Agent in Account → Advanced features, plus the org-level CanUseModelerAgent setting). If you don't see it, your Workspace Admin or Primary Owner can request activation via a support ticket.
  • Open the application you want to audit. The agent can only see the application you currently have open. Audit one app per session.
  • Decide your risk tolerance up front. For an audit, you almost always want read-only behavior — covered by the guardrail prompt below.
  • Have a scratchpad ready. A Pigment note, Google Doc, or Notion page to capture findings across the four dimensions. Use the Audit report template later in this doc.

 

What to bring

  • [ ] App name and business purpose
  • [ ] Top 5 slowest metrics by calc time (see Pro tip in the Performance section)
  • [ ] List of sensitive dimensions and access policies
  • [ ] Known pain points from business users
  • [ ] Audit report template open and ready

 

Guardrail prompt — paste this once at the start of every audit session:

You are auditing this application in read-only mode. Do not create, edit, or delete any blocks, metrics, lists, boards, views, or access rights. For every finding, return: (1) what you observed, (2) why it matters, (3) a recommendation, and (4) the risk if no action is taken. Wait for my explicit approval before making any change.

 


 

Quick start (5–10 minutes)

 

Open a fresh Modeler Agent session in the application you want to audit and paste these six prompts in order, one at a time. Wait for each response before sending the next.

 

1. Guardrail (always first)

 

You are auditing this application in read-only mode. Do not create, edit, or delete any blocks, metrics, lists, boards, views, or access rights. For every finding, return: (1) what you observed, (2) why it matters, (3) a recommendation, and (4) the risk if no action is taken. Wait for my explicit approval before making any change.

 

2. Modeling & Architecture — Grand Tour

 

Scan this application's structure and summarize the key lists, metrics, and boards in business language. Group by functional area (e.g. Data Hub, Planning, Reporting). Flag anything that looks like a duplicate, a test block, or an orphan. Return findings as a table: Audit Area | Observation | Severity (Major / Minor / OK) | Recommendation | Risk if no action.

 

3. Performance & Calc Health — Formula scan

 

Review the formulas in this application and flag any likely to be expensive: nested IF chains that could be lookups, formulas aggregating across very large dimensions without filters, redundant sub-expressions across metrics, and Excel/SQL patterns that don't translate to Pigment (e.g. SUM(...BY...)). For each, return: metric name, summary of the formula, why it's a concern, and a suggested refactor. Use the same Audit Area | Observation | Severity | Recommendation | Risk table.

 

4. Security & Access Rights — Role audit

 

Audit the access rights for this application. For each role or group with access, describe: (1) which blocks they can read vs. write, (2) which dimensions (e.g. Cost Center, Region) are filtered for them, (3) whether that access looks consistent with their apparent business role. Flag any role that appears over-permissioned or under-permissioned. Use the same Audit Area | Observation | Severity | Recommendation | Risk table.

 

5. Governance & Best Practices — Naming & docs

 

Review naming conventions across lists, metrics, and boards. Flag blocks whose names are ambiguous, inconsistent with siblings, or likely to confuse a new modeler. Also flag blocks with no description. Use the same Audit Area | Observation | Severity | Recommendation | Risk table.

 

6. Executive summary

 

Based on everything we've reviewed in this session (modeling, performance, access rights, governance), produce an executive summary with: (1) three-sentence overall health statement, (2) top 5 findings ranked by business risk, (3) recommended 30 / 60 / 90-day actions, (4) items that require a Workspace Admin or Pigment support. Format: Markdown with headings and a ranked table for the top 5 findings.

 

Drop the findings into the Audit report template below. For deeper drill-downs on any section, use the deep-dive toggles in the four-part audit.

 

Every finding should land in this single output shape:

 

Audit Area | Observation | Severity (Major / Minor / OK) | Recommendation | Risk if no action

 


 

How to prompt the agent well

 

Every prompt should include four things:

 

Element | What to include | Example
Goal | What you want to learn or produce | "Audit performance of this application"
Context | Which blocks, dimensions, or assumptions to use | "Focus on the Revenue Planning app, last 3 months of activity"
Constraints | What the agent should not do | "Read-only. Do not modify formulas or access rights."
Format | Exactly how the output should look | "Return a table with columns: Area / Finding / Severity / Recommendation"
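
Putting the four elements together, a single audit prompt might read like this (an illustrative example composed from the rows above):

Audit the performance of this application. Focus on the Revenue Planning app and the boards used in the last 3 months of activity. Read-only: do not modify formulas or access rights. Return a table with columns: Area / Finding / Severity / Recommendation.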

 


 


 

Severity rubric

 

Use these definitions consistently across all four audits.

 

Severity | Definition | Action timing
Major | Causes incorrect results, broken access, or significant performance / maintenance burden. Will get worse with scale. | Fix in 0–30 days
Minor | Suboptimal but not breaking. Adds friction, inconsistency, or technical debt. | Fix in 30–90 days
OK | Aligned with best practice. No action needed; document if useful. | None

 


 

The four-part audit

 

Four dimensions, run in order:

  • Modeling & Architecture — is the app built right?
  • Performance & Calc Health — does it run well?
  • Security & Access Rights — is the right data in the right hands?
  • Governance & Best Practices — will it stay healthy over time?

Each section follows the same pattern: Goal → Warm-up prompt → Deep dive (in toggle) → Example finding. Run them in order. The guardrail at the top of the session keeps everything read-only — no need to repeat "do not change anything" in every prompt.

 

1. Modeling & Architecture

 

Goal: Find duplicate or unused metrics, inconsistent dimension usage, misaligned granularity, legacy blocks, and apps that have outgrown their original design.

 

Warm-up — Grand Tour

Scan this application's structure and summarize the key lists, metrics, and boards in business language. Group by functional area (e.g. Data Hub, Planning, Reporting). Flag anything that looks like a duplicate, a test block, or an orphan.

 

Deep-dive prompts (architecture review + cleanup scan)

 

Architectural review

Review the modeling architecture against Pigment best practices. Evaluate: (1) Data staging — are raw inputs separated from transformed/modeled data? (2) Dimension reuse — are concepts like Product or Cost Center modeled once or duplicated? (3) Metric granularity — are any metrics dimensioned more finely than needed? (4) Legacy patterns — any spreadsheet carry-overs (e.g. row-by-row FX)? Return a table: Audit Area | Observation | Severity | Recommendation | Risk if no action.

 

Cleanup scan

Identify metrics and lists that look like duplicates or leftover test blocks. For each candidate, confirm: (a) no downstream dependencies, (b) not used in any board, (c) not used in any view, (d) not shared to other libraries. Produce the list for my review.

 

Example finding

Audit Area | Observation | Severity | Recommendation | Risk if no action
FX Conversion | Legacy row-by-row logic in Revenue_USD | Major | Replace with Pigment FX best-practice pattern | Maintenance burden, incorrect restatements

 


 

2. Performance & Calc Health

 

Goal: Find expensive formulas, over-dimensioned metrics, long dependency chains, and boards that load slowly.

 

Warm-up — Formula scan

Review the formulas in this application and flag any likely to be expensive: nested IF chains that could be lookups, formulas aggregating across very large dimensions without filters, redundant sub-expressions across metrics, and Excel/SQL patterns that don't translate to Pigment (e.g. SUM(...BY...)). For each, return: metric name, summary of the formula, why it's a concern, and a suggested refactor.

 

Deep-dive prompts (board load + agent self-check)

 

Board & view load

List the boards most likely to feel slow based on the number and complexity of metrics they display. Suggest three concrete optimizations per board.

 

Agent self-check

What signals can you inspect in this application (formula complexity, dimension depth, dependency chains)? What performance issues can you not detect?

 

Pro tip — feed the agent your real heaviest metrics:

  1. Open the All Blocks view, enable the Calc time column, sort descending.
  2. Copy the top 5 metric names and paste this prompt:

Here are the heaviest metrics by calc time: @Metric1, @Metric2, @Metric3, @Metric4, @Metric5. For each, review the formula and dimensionality, explain why it's expensive, and propose a concrete refactor. Prioritize by expected calc-time impact.

 

Example finding

Audit Area | Observation | Severity | Recommendation | Risk if no action
@Forecast_Adj | Nested 7-level IF over full Product × Region grid | Major | Replace with mapping list; pre-aggregate by Region | Board load time grows linearly with Product count

 

The agent infers likely issues from structure; it can't measure runtime. Validate top findings by opening the suspect blocks/boards and checking actual load times.
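
For intuition on the most common refactor above (a deep nested IF replaced by a mapping lookup), here is a minimal generic sketch in Python. This is not Pigment formula syntax, and the region names and factors are invented for illustration; in Pigment the equivalent is typically a mapping list or lookup metric.

```python
# Nested-IF style: every evaluation walks the whole chain,
# and adding a new region means editing the formula itself.
def adjustment_nested(region: str) -> float:
    if region == "NA":
        return 1.10
    elif region == "EMEA":
        return 1.05
    elif region == "APAC":
        return 0.95
    elif region == "LATAM":
        return 0.90
    else:
        return 1.00

# Lookup style: the logic lives in data, so the "formula" becomes a
# single lookup and new regions are just new rows in the mapping.
ADJUSTMENT_BY_REGION = {
    "NA": 1.10,
    "EMEA": 1.05,
    "APAC": 0.95,
    "LATAM": 0.90,
}

def adjustment_lookup(region: str) -> float:
    return ADJUSTMENT_BY_REGION.get(region, 1.00)
```

The same data-driven pattern is what "replace nested IFs with a mapping list" means in practice: the branching disappears from the formula and moves into a block the business can maintain.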

 


 

3. Security & Access Rights

 

Goal: Find over-permissive access, missing row-level restrictions on sensitive dimensions, inconsistent role assignments, and gaps between business intent and technical configuration.

 

Warm-up — Role audit

Audit the access rights for this application. For each role or group with access, describe: (1) which blocks they can read vs. write, (2) which dimensions (e.g. Cost Center, Region) are filtered for them, (3) whether that access looks consistent with their apparent business role. Flag any role that appears over-permissioned or under-permissioned. Return a table.

 

Deep-dive prompts (sensitivity scan + capability check)

 

Sensitivity scan

Which blocks contain typically sensitive data (headcount, compensation, margin, customer-level detail)? For each, describe the current access pattern and whether it looks appropriately restricted.

 

Capability check

Can you inspect access rights only in this application, or across the workspace? What can you help me with on access rights, and what can't you help with?

 

Example finding

Audit Area | Observation | Severity | Recommendation | Risk if no action
Headcount_Detail | "Regional Planner" group has write access; no Cost Center filter | Major | Restrict to read; add Cost Center row-level filter | Cross-region visibility into compensation data

 

The agent acts as you. It can only see access rights you can see. If the agent reports "I can't inspect that," escalate to your Workspace Admin — the limitation is permissions, not the agent.

 


 

4. Governance & Best Practices

 

Goal: Find naming inconsistencies, undocumented blocks, orphaned applications, missing owners, and drift from Pigment's recommended patterns.

 

Warm-up — Naming & docs

Review naming conventions across lists, metrics, and boards. Flag blocks whose names are ambiguous, inconsistent with siblings, or likely to confuse a new modeler. Also flag blocks with no description.

 

Deep-dive prompts (best-practice alignment + lifecycle)

 

Best-practice alignment

Compare this application to Pigment's published best practices for enterprise modeling. Assess: separation of concerns (data load vs. modeling vs. reporting); reusable vs. app-local dimensions; transaction lists for facts vs. planning lists for inputs; versioning and scenario handling. Rate each area Aligned / Partially Aligned / Not Aligned, with a one-sentence justification and the single highest-impact improvement.

 

Ownership & lifecycle

Identify blocks that appear unused, undocumented, or unowned. Suggest a remediation plan per block: archive, document, reassign, or migrate.

 

Example finding

Audit Area | Observation | Severity | Recommendation | Risk if no action
Naming | Three metrics named Rev, Revenue_v2, Total Rev with overlapping logic | Minor | Standardize on Revenue_Net; deprecate the others | End-user confusion; inconsistent reporting

 


 

Consolidating the audit

 

After running all four sections, ask the agent to pull everything together:

 

Executive summary prompt

Based on everything we've reviewed in this session (modeling, performance, access rights, governance), produce an executive summary with: (1) three-sentence overall health statement, (2) top 5 findings ranked by business risk, (3) recommended 30 / 60 / 90-day actions, (4) items that require a Workspace Admin or Pigment support. Format: Markdown with headings and a ranked table for the top 5 findings.

 


 

Tips for getting reliable results

 

  1. Use Plan mode for any remediation. After the audit, ask: "Propose a plan to implement [finding #3]. Don't execute until I approve." Review in the Plan panel before approving.
  2. Iterate, don't restart. If a finding feels off, refine: "Re-evaluate finding #2 assuming Headcount excludes contractors." Don't lose context.
  3. Provide the unwritten rules. Lead with context like "FX rates are monthly averages" or "Q1 FY26 is the first closed quarter."
  4. Mention blocks by name with @. @Revenue is sharper than "the revenue metric."
  5. Confirm before destructive action. Ask the agent to list exactly what will change, then approve item by item.
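
Putting tips 1, 4, and 5 together, a remediation request after the audit might look like this (an illustrative example; the finding and block name come from the Modeling & Architecture example above):

Propose a plan to implement finding #1: replace the legacy row-by-row FX logic in @Revenue_USD with the recommended Pigment FX pattern. List exactly which blocks will change, and don't execute until I approve.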

 


 

Audit report template

 

Copy this into a Pigment note, Google Doc, or Notion page at the start of the session.

 

Application: ___

Auditor: ___

Date: ___

 

Executive summary (3 sentences)

Add health statement here.

 

Top findings

# | Area | Finding | Severity | Recommendation | Owner | Status
1 | | | | | |
2 | | | | | |
3 | | | | | |
4 | | | | | |
5 | | | | | |

 

  • 30-day actions:
  • 60-day actions:
  • 90-day actions:
  • Needs Admin / Support:

 


 

Troubleshooting — "the agent says it can't"

 

If the agent reports it cannot see or change something, the cause is almost always one of:

  • Permissions — the agent acts as you. If you can't see it, neither can the agent. Escalate to your Workspace Admin.
  • App scope — the agent only sees the open application. Open the right app and rerun.
  • Missing context — the agent needs your business assumptions. Restate the rule (e.g. "FX is monthly average") and retry.
  • Modeler Agent not enabled — feature flag is off. Workspace Admin or Primary Owner files a support ticket.

 


 

Appendix — copy/paste audit kit

  • Full prompt sequence (paste one at a time)
    1. Guardrail: You are auditing this application in read-only mode. Do not create, edit, or delete any blocks. For every finding, return: (1) observation, (2) why it matters, (3) recommendation, (4) risk if no action. Wait for my approval before any change.
    2. Grand Tour: Scan this application's structure and summarize the key lists, metrics, and boards in business language. Group by functional area. Flag duplicates, test blocks, and orphans.
    3. Architecture: Review modeling architecture against Pigment best practices (staging, dimension reuse, granularity, legacy patterns). Return a table.
    4. Cleanup candidates: Identify likely duplicate or test blocks. Confirm no dependencies, board usage, view usage, or library sharing.
    5. Performance — formulas: Flag likely-expensive formulas (nested IFs, wide aggregations, redundant sub-expressions, Excel-style patterns). Suggest refactors.
    6. Performance — boards: List the boards most likely to feel slow and suggest three optimizations each.
    7. Access rights: Audit roles and groups: who can read/write what, which dimensions are filtered, where access is over- or under-permissioned.
    8. Sensitivity scan: Identify sensitive blocks and describe whether access looks appropriately restricted.
    9. Naming & docs: Review naming conventions and flag ambiguous names and undocumented blocks.
    10. Best-practice alignment: Rate the app (Aligned / Partially / Not Aligned) across separation of concerns, reusable dimensions, fact vs. planning lists, scenario handling.
    11. Ownership & lifecycle: Identify unused, undocumented, or unowned blocks. Propose archive/document/reassign/migrate per block.
    12. Executive summary: Produce a summary with a health statement, top 5 risk-ranked findings, 30/60/90-day plan, and items needing Admin/support.

 

Next step: After running the audit, bring the executive summary to your SAM or to your next PSP expert hour — it's the fastest way to turn findings into a prioritized remediation plan.