Introduction
In Pigment, building something that works is not enough. The goal is to build it right and to optimise for performance from the first design decision to the last. This guide consolidates best practices from the Pigment Modeling Palette into a single reference, updated with practical examples and annotated with the reasoning behind each recommendation. A performant Pigment model is fast, scalable and maintainable: a lean engine that delivers results quickly and gets used daily.
The series covers every factor that affects performance: core dimensional architecture and sparsity; scoping and formula efficiency; refactoring, snapshots, transaction lists and import design; size estimation, version management, testing and user adoption. Each part includes concrete Workforce Planning and FP&A examples and comparisons of inefficient versus optimised formulas with estimated performance impacts; Part 4 closes with a consolidated best-practices-versus-pitfalls reference table.
Who this guide is for. Expert Pigment modelers, solution architects and EPM professionals who want a definitive reference for building applications that remain fast as they scale, survive organisational change and earn the daily trust of the people who use them.
- Part 1 · Architectural Foundations: Dimensions, Sparsity and the Discipline of Scope
- Part 2 · Calculation Mastery: How to Write Optimised Formulas That Scale
- Part 3 · Lifecycle Performance: Views, Snapshots, Testing and Model Hygiene
- Part 4 · Performance in Action: Workforce and FP&A Patterns with Practical Reference
Lifecycle Performance: Views, Snapshots, Testing and Model Hygiene
A performant model is a maintained system. The pattern is: design lean, build scoped, test under load, then maintain. This part covers the view-versus-metric decision rule, version discipline, testing strategy and the tooling you use to measure performance in production.
What This Part Covers
Part 3 covers the decisions and practices that determine whether performance holds up over the full lifecycle of a model. The key decisions covered here are:
- When to build a metric versus when to keep a calculation in the view layer, and why the boundary matters for recalculation overhead.
- How to use snapshots and version discipline to keep the live model lean as history accumulates.
- How to test performance under realistic conditions before go-live, including concurrency and edge cases.
- How to use Performance Insights and the Profiler to diagnose and measure performance in production.
- How to maintain model hygiene over time by distinguishing blocks that are referenced from blocks that contribute to a business output.
1. Views Before New Metrics
A common mistake is to create a new metric for every new requirement. Each new KPI request adds structure. Over time, the model becomes heavier, harder to audit and slower. The first question should not be "what block should I create?" but "does this information already exist?"
Before creating a new metric, ask three questions: Does this data already exist somewhere in the model? Does it exist at a finer granularity and therefore already aggregate automatically? Is this truly new logic, or just a different way of presenting existing data?
Very often, what appears to be a new KPI is a ratio between two existing metrics, a specific aggregation, a count, an average or a filtered view of existing data. None of those require a new metric.
View-Level Calculations: Calculated Items and Show Values As
Pigment views let you compute values on the fly using Calculated Items and Show Values As. Ratios, sums, averages, counts, first/last values and alternative aggregation methods can all be built inside the view layer, without increasing the number of blocks in the model. Item Variables extend this further: they can be reused across Show Values As and Calculated Items, and any update applies everywhere they are deployed, giving you parameterisable reporting logic with no model-level overhead.
The Referenceability Decision Rule
| Decision rule: If the result of a calculation will later be referenced in another formula, it must exist as a metric. It is part of the calculation chain. If the result is purely for reporting and will not be reused downstream, build it in the view layer using Calculated Items or Show Values As. |
Every additional block increases structural complexity. Every unnecessary metric adds potential recalculation overhead, increases cognitive load and complicates maintenance. Note that Show Values As results cannot be referenced by other metrics via formulas. Knowing this boundary is what makes the decision rule work in practice.
2. Snapshots and Slices: Managing Data Over Time
Performance also depends on keeping the live model lean. Snapshots and data slices are lifecycle performance tools, not just audit tools.
Use Snapshots to Offload Historical Data
Snapshots freeze an application's state at a point in time, capturing metrics, dimensions and boards as read-only data. At the end of each planning cycle, taking a snapshot lets you trim the live model without losing access to the past.
Example: At year-end, snapshot the application ("Actuals FY2025"). Going into FY2026, remove the closed year from the Calendar and drop those Actuals from the live metrics. All calculations for new inputs now run only for FY2026 data. The live model's size drops proportionally and so does every calculation that spans the time dimension.
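The size impact of trimming a closed year is easy to estimate by hand. A minimal sketch of the arithmetic, where the dimension sizes and the two-year calendar are illustrative assumptions rather than figures from a real model:

```python
# Illustrative estimate of how trimming a closed year shrinks a metric's
# maximum size. All dimension cardinalities here are hypothetical.

def max_cells(dimension_sizes):
    """Theoretical maximum cell count: the product of dimension cardinalities."""
    total = 1
    for size in dimension_sizes:
        total *= size
    return total

# A metric dimensioned by Employee x Account x Month.
employees, accounts = 5_000, 40

before = max_cells([employees, accounts, 24])  # FY2025 + FY2026 in the Calendar
after = max_cells([employees, accounts, 12])   # FY2025 removed after snapshotting

print(before, after)   # 4,800,000 vs 2,400,000 potential cells
print(after / before)  # 0.5 — size drops in proportion to the trimmed periods
```

Because maximum size is a product over dimensions, removing half the periods halves the ceiling for every metric that spans the time dimension, which is why the example above states the drop is proportional.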
Version Dimension vs. Native Scenario
The native Pigment Scenario feature should not be the primary mechanism for handling core versions like Actual, Budget and Forecast.
- Use a standard Version dimension for core planning cycles. This gives you explicit control: Actuals can be loaded imports that never recalculate, Budget can be static inputs, Forecast can be formula-driven. Each version can be snapshotted and archived independently.
- Reserve the native Scenario feature for ad-hoc what-if sandboxing. Scenarios are well suited for isolated hypothesis testing without changing model structure, but using them as a substitute for a Version dimension limits flexibility and can cause more data to be recalculated than necessary.
Version isolation example (FP&A): When a growth assumption changes, only the Forecast metric recalculates, and only for future periods. Actual Revenue is untouched because it is stored data. In a monolithic approach where one formula handles all versions via nested IF conditions, Pigment evaluates the full conditional for every version on every change, including Actuals that will never change.
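The difference in recalculation scope can be sketched with a toy dirty-flag model. This is a deliberate simplification, not Pigment's actual engine, and the version and period names are invented for illustration:

```python
# Toy sketch of recalculation scope: version-separated metrics vs one
# monolithic metric with nested version conditionals. Not Pigment's engine —
# just an illustration of which cells a growth-assumption change dirties.

VERSIONS = ["Actual", "Budget", "Forecast"]
MONTHS = list(range(1, 13))
CURRENT_MONTH = 7  # months >= 7 are future periods

def dirty_cells_version_separated():
    # Actuals and Budget are stored data; only the formula-driven Forecast
    # metric depends on the assumption, and only for future periods.
    return [("Forecast", m) for m in MONTHS if m >= CURRENT_MONTH]

def dirty_cells_monolithic():
    # One metric handles all versions via nested IFs, so every
    # (version, month) cell depends on the assumption and must be
    # re-evaluated on any change — including Actuals that never change.
    return [(v, m) for v in VERSIONS for m in MONTHS]

print(len(dirty_cells_version_separated()))  # 6 cells recalculate
print(len(dirty_cells_monolithic()))         # 36 cells recalculate
```

The gap widens with every extra version and period: the monolithic shape scales with the full version-by-time space, while the isolated shape scales only with the formula-driven slice.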
Data Slices for Controlled Comparisons
Pigment's Data Slices let you define a specific combination of dimension items (a Version, a Scenario, a specific period) and expose it as a named slice that can be referenced in views and calculations. A slice does not duplicate data; it is a pointer to an existing subset of the model, or to a snapshot.
Example: You have snapshotted "Forecast June" and want to compare it against the current "Forecast September" on a single board. Create a Data Slice for the snapshot pointing at the June version and another pointing at the current September version. A Calculated Item in the view can then compute the variance between the two slices on the fly, without storing a separate variance metric in the live model. This is more flexible than pre-computing a variance metric and avoids adding recalculation overhead to every model change.
3. Testing Under Real Conditions
A model that calculates quickly for one person with a small dataset is not necessarily a performant model. Testing under realistic conditions is the only way to know whether the architecture and formula choices actually hold up.
Use Production-Like Data Volumes
As soon as the model structure is in place, load it with as much data as you expect in real use. For a sales planning model, import a few years of sales data; for workforce, load thousands of employees. This surfaces performance issues while there is still time to address them and allows you to verify that your sparsity assumptions match the actual data.
Increase Concurrency
A model that is fast for one user may slow down considerably when 50 users are triggering calculations simultaneously. Run a concurrency test: have several people perform tasks in parallel, entering data, changing selectors and running processes. If one user's large import blocks others, consider scheduling those heavy tasks for off-peak times or splitting imports into smaller batches.
Test Typical User Flows
Identify the key interactions users will have: entering a number in a planning grid, running a top-down allocation, changing a scenario selector, loading a dashboard with heavy calculations. Simulate those interactions with a timer. Anything more than a few seconds for a common action will hurt adoption. If certain actions are slow, profiling (covered in the next section) will trace which metric is responsible.
Test Edge Cases
Test worst-case inputs: a user pasting 1,000 new items, a scenario copy that writes to every cell in that scenario, an end-of-quarter period where everyone enters data simultaneously. Ensure the model can handle peak load, not just average conditions.
Iterate and Optimise
The testing process will reveal bottlenecks that were not obvious from inspecting the model. Treat it as an iterative loop: optimise the hotspots, then test again under the same conditions. The earlier you do this relative to go-live, the less expensive the fixes.
| Worth noting: Performance problems found during a controlled test cost a fraction of what they cost to fix in production, with users waiting. |
4. Performance Insights and Profiling: Measuring What Matters
These two tools complete the toolchain introduced in Part 2. The Formula Playground helps while you are writing; Performance Insights and Profiling tell you what is actually happening once the model is live.
Performance Insights
Performance Insights operates at the model level. For each metric, it exposes two key indicators you should check regularly:
- Maximum size: the theoretical maximum number of cells the metric could hold given its dimensional structure. If this number is unexpectedly high, it means the dimensional architecture is adding unnecessary cardinality, even if the metric is currently sparse. A maximum size that is orders of magnitude larger than the current populated size is a signal to review the metric's dimensional structure.
- Density: the ratio of populated cells to maximum size. A metric with very low density is healthy. A metric whose density is rising over time may indicate a densification problem upstream (a formula filling zeros, an import loading nulls). A metric approaching high density on a large maximum size is the most expensive category and the first place to focus optimisation.
Use Performance Insights proactively, not only when users are reporting slowness. A metric that shows unexpectedly high cell counts or rising density on a small dataset is a warning sign that will compound as data grows.
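The two indicators can be reproduced by hand to build intuition for what they measure. In this sketch the dimension names, cardinalities and populated count are hypothetical:

```python
# Hand-computing the two Performance Insights indicators for a metric
# dimensioned by Employee x Cost Centre x Month. All counts are hypothetical.

def metric_indicators(dimension_sizes, populated_cells):
    """Return (maximum size, density) for a metric's dimensional structure."""
    maximum_size = 1
    for size in dimension_sizes:
        maximum_size *= size
    density = populated_cells / maximum_size
    return maximum_size, density

# 5,000 employees x 200 cost centres x 12 months, but each employee sits in
# exactly one cost centre, so at most 5,000 x 12 cells are ever populated.
max_size, density = metric_indicators([5_000, 200, 12],
                                      populated_cells=5_000 * 12)

print(max_size)  # 12,000,000
print(density)   # 0.005 — 199/200 of the space is structural sparsity
```

A maximum size orders of magnitude above the populated count, as here, is exactly the "review the metric's dimensional structure" signal described above: the structure is paying for cardinality the data can never use.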
Profiling
Profiling captures the calculation path for a specific user action and shows which blocks were slow to calculate following that action. This is the most targeted diagnostic tool available. After each simulated user flow in testing, run a profiling capture to see exactly where time was spent. Profiling captures are retained for only three days, so profile immediately after each key test, not days later.
The Three Tools in Context
The three tools map onto distinct phases of work:
- Formula Playground (Part 2): while writing. Validate structure and catch scope and alignment issues before committing.
- Performance Insights: during build and testing. Check maximum size and density for your heaviest metrics. Use on a regular schedule, not only when something feels slow.
- Profiling: after user actions in test and production. Pinpoint which block is responsible for delay in a specific user flow. Repeat after each optimisation round to confirm the change made a measurable difference.
5. Maintenance: Delete What Is Not Used
Performance is directly linked to how lean the model remains over time. A model grows naturally: metrics are added, boards are created, intermediate logic accumulates. Without discipline, that growth adds weight without adding value.
Board Hygiene First
Over time, boards are created for testing, prototyping and temporary analysis. Many never become part of the actual business process. Keeping them increases clutter, reduces clarity and makes governance more difficult. Keep only boards that are actively used in real processes. Delete test boards, unfinished drafts and obsolete views. If a board is not used by users and not part of a defined workflow, it should not remain in the application.
Block Hygiene: Referenced Does Not Mean Useful
A block may appear "used" because it is referenced by another block. That does not mean it contributes to the final model output. If an entire calculation chain is technically connected internally but never feeds a metric displayed on a live board, the whole chain is dead weight.
Pigment's Block Explorer identifies whether a block is technically referenced. The deeper question is whether that block ultimately feeds a metric that is exposed on a board and used by users. If the answer is no, the entire logic chain should be reviewed. The dependency diagram helps trace these connections.
Perform this review regularly. Ask: What was created? By whom? For which purpose? Is it still needed? Could we have reused something existing? Is there unfinished logic that should be completed or removed?
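The "referenced versus contributing" distinction is a reachability question over the dependency graph: a block contributes only if some path leads from it to a metric exposed on a live board. A minimal sketch, with block names and edges invented for illustration:

```python
# Sketch: a block "contributes" only if a board-exposed metric is reachable
# from it through the dependency graph. Blocks that are referenced but never
# reach a board output are dead weight. Names and edges are invented.

# edges[a] = blocks that reference a (i.e. blocks that a feeds into)
edges = {
    "Headcount Input": ["Salary Cost"],
    "Salary Cost": ["Total Opex"],
    "Total Opex": [],
    "Old Allocation Driver": ["Legacy Split"],  # referenced...
    "Legacy Split": [],                         # ...but feeds no board
}
board_metrics = {"Total Opex"}

def contributing_blocks(edges, board_metrics):
    """All blocks from which a board-exposed metric is reachable."""
    contributing = set(board_metrics)
    changed = True
    while changed:
        changed = False
        for block, dependents in edges.items():
            if block not in contributing and any(
                d in contributing for d in dependents
            ):
                contributing.add(block)
                changed = True
    return contributing

live = contributing_blocks(edges, board_metrics)
dead = set(edges) - live
print(sorted(dead))  # ['Legacy Split', 'Old Allocation Driver']
```

Note that "Old Allocation Driver" is referenced by "Legacy Split", so a naive "is it referenced?" check keeps both, even though neither ever reaches a board. That is the trap the section warns about.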
| Key point: Lean architecture is not achieved once at the start of a build. It requires ongoing attention. A model that is never reviewed for dead weight will degrade over time. |
Part 3: Key Takeaways
- Before creating a new metric, decide whether the requirement is reporting-only or needs reuse downstream. If reporting-only, use Show Values As, Calculated Items and Item Variables in the view layer.
- Snapshot validated cycles to create read-only reference points and trim the live model. Keep the Version dimension restricted to active planning cycles.
- Use a standard Version dimension for core planning cycles. Reserve the native Scenario feature for ad-hoc what-if sandboxing only.
- Test with production-like data volumes and concurrency before go-live. Performance problems at scale are far cheaper to fix before users arrive.
- In Performance Insights, review maximum size and density for your heaviest metrics. Rising density or unexpectedly large maximum size are the two indicators that require action.
- Profile immediately after test actions: the profiling window is only three days. Repeat after each optimisation round to confirm the change made a difference.
- Run board hygiene reviews followed by block hygiene reviews. "Referenced" and "contributing to a business output" are not the same thing.

