Introduction
In Pigment, building something that works is not enough. The goal is to build it right and to optimise for performance from the first design decision to the last. This guide consolidates best practices from the Pigment Modeling Palette into a single reference, updated with practical examples and annotated with the reasoning behind each recommendation. A performant Pigment model is fast, scalable and maintainable: a lean engine that delivers results quickly and gets used daily.
The series covers every factor that affects performance: core dimensional architecture and sparsity; scoping and formula efficiency; refactoring, snapshots, transaction lists and import design; size estimation, version management, testing and user adoption. Each part includes concrete Workforce Planning and FP&A examples, comparisons of inefficient versus optimised formulas with estimated performance impacts, and a consolidated best practices versus pitfalls reference table in Part 4.
Who this guide is for. Expert Pigment modelers, solution architects and EPM professionals who want a definitive reference for building applications that remain fast as they scale, survive organisational change and earn the daily trust of the people who use them.
- Part 1 · Architectural Foundations: Dimensions, Sparsity and the Discipline of Scope
- Part 2 · Calculation Mastery: How to Write Optimised Formulas That Scale
- Part 3 · Lifecycle Performance: Views, Snapshots, Testing and Model Hygiene
- Part 4 · Performance in Action: Workforce and FP&A Patterns with Practical Reference
Architectural Foundations: Dimensions, Sparsity, Scale and the Discipline of Scope
Performance in Pigment is engineered through architecture, not through last-minute formula tweaks. The decisions you make before writing a single formula determine everything downstream.
What This Part Covers
Part 1 establishes the three structural levers that govern performance at the model level: dimensional architecture, sparsity, and scope. It also introduces size estimation as a design-phase validation step. The key decisions covered here are:
- Which dimensions belong in a metric structure and which belong in properties or mapping metrics.
- How sparsity is preserved through structural and formula choices, and why densification is a performance cost, not a formatting convenience.
- How Pigment's scoped calculation system works, what breaks it, and how to design calculation chains that keep changes local.
- How to estimate cell volume before building and use that estimate to validate the dimensional architecture.
1. Core Dimensional Architecture: Keep It Lean and Relevant
Each metric's dimensional structure should include only the core dimensions required for the calculation. Extra dimensions inflate the possible cell count (cardinality) and add complexity without benefit. Pigment's engine is sparse, but unnecessary dimensions still bloat the model's theoretical size and complicate formulas and long-term maintenance.
Identify Core Dimensions
A planning model's core dimensions typically cluster into four families:
- Time: the calculation grain. Month is common, but the right grain is the one your logic requires. If the business process plans weekly, Week is core. If the logic is monthly and you only want Week for reporting, Week should not become structural. Year is almost always a property of Month. Do not add it as a separate structural dimension.
- Organisation: the entry point for security and planning ownership, typically Cost Center, Business Unit or Legal Entity. This is also the primary axis for data access rights configuration.
- Version: the dimension that separates alternative sets of assumptions and results. A standard Version (also named Scenario or Planning cycle) dimension handles Actual, Budget and Forecast while keeping the model structure consistent. The native Pigment Scenario feature serves a different purpose, covered in Part 3. Read this article to learn more about the best practices for versioning in Pigment.
- Business grain: the dimension at which the logic operates. Employee for workforce planning, SKU or Product for supply chain, Account for FP&A.
The modelling rule is straightforward: only add a dimension if your calculation logic requires it. Each additional dimension increases the potential intersections and the complexity of the model, even in a sparse system where only populated combinations are stored (read this article to learn more about Pigment sparse engine).
Exclude Property Dimensions
If a dimension is essentially a property of another, do not include it as a structural dimension in the metric. Use Pigment's ability to reference properties in formulas instead.
Example: If each Employee has a Department attribute, do not add Department as a structural dimension alongside Employee. An employee-level metric can still be aggregated by department through a formula using the Employee-to-Department relationship, without adding Department to the metric's structure. Cardinality stays much smaller as a result.
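The aggregation pattern above can be sketched in Python. This is an illustration of the concept, not Pigment code: the employee IDs, departments and salary figures are hypothetical, and the dictionary lookup stands in for referencing the Employee-to-Department property in a formula.

```python
from collections import defaultdict

# Hypothetical data: salary is stored only by (employee, month).
# Department is a property of Employee, not a structural dimension.
salary = {("E1", "2024-01"): 5000.0, ("E2", "2024-01"): 6000.0, ("E3", "2024-01"): 4500.0}
department_of = {"E1": "Sales", "E2": "Sales", "E3": "R&D"}  # Employee property

# Aggregate by department through the Employee -> Department relationship,
# analogous to referencing the property in a formula.
by_department = defaultdict(float)
for (employee, month), value in salary.items():
    by_department[(department_of[employee], month)] += value
```

The metric's keys never include Department, yet the departmental view is fully recoverable from the relationship — which is exactly why the structural dimension is unnecessary.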
Minimise Cardinality
Cardinality is the total number of possible cell combinations. Every extra dimension or unnecessary item multiplies the cell count. A metric by Employee (1,000 items) and Month (36 items) has 36,000 possible cells. Adding Department (10 items) brings it to 360,000, a 10x increase, even though each employee belongs to only one department and 90% of those cells will always be blank. Pigment does not store empty cells, but calculations can still evaluate a larger set of potential intersections depending on the formula logic. Keeping the structure lean reduces the scope of computation and improves performance.
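The cardinality arithmetic from this example, written out as a quick sanity check (illustrative numbers from the paragraph above):

```python
# Possible-cell arithmetic for the Employee x Month example.
employees, months, departments = 1_000, 36, 10

lean = employees * months                    # Employee x Month only
bloated = employees * departments * months   # Department added structurally

assert lean == 36_000
assert bloated == 360_000

# Each employee belongs to exactly one department, so at most 1/10 of the
# bloated intersections can ever hold data -- 90% are structurally empty.
populated_fraction = lean / bloated  # 0.1
```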
Use Mapped Dimensions for Reporting
Pigment supports mapped dimensions in views: you can display data by a related dimension without that dimension being in the metric's structure. In a workforce model, you might store Employee Salary by Month and use a mapping metric to show results by Contract Type or Region in a view, without adding those dimensions to the metric structure. This avoids structural bloat driven purely by reporting needs.
Mapping Properties vs. Mapping Metrics
Not all relationships between dimensions are static. Pigment distinguishes two mechanisms:
- Mapping properties define a fixed, point-in-time relationship. An employee's home country, for example, is stable enough to be a property. These are computed once and cheap to reference.
- Mapping metrics define a dynamic relationship involving multiple dimensions, often including Time. If an employee can change departments or cost centres across months, that relationship should be modelled as a mapping metric rather than a static property. Mapping metrics let Pigment resolve the relationship correctly at each point in time.
Using the wrong mechanism has both a correctness cost (wrong numbers) and a performance cost (unnecessary recomputation or densification). The decision between the two belongs in the architecture phase.
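The distinction can be sketched in Python — a hypothetical illustration, not Pigment syntax. A static property is one value per item; a time-varying relationship needs the Time dimension in its key:

```python
# Static relationship (mapping property): one stable value per employee.
home_country = {"E1": "FR"}

# Time-varying relationship (mapping metric): resolved per (employee, month).
department_in = {("E1", "2024-01"): "Sales", ("E1", "2024-02"): "R&D"}

def department(employee, month):
    # A mapping metric resolves the relationship at each point in time;
    # a static property would give the same answer for every month and
    # silently misattribute data after a department change.
    return department_in[(employee, month)]
```

Modelling the department move as a static property would report E1's February figures under Sales — the correctness cost the paragraph above describes.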
| Architecture principle: Keep each metric's structure restricted to core dimensions. Use mapped dimensions or properties to project values into reporting groupings, rather than pushing reporting groupings into core metric structures. |
2. Embracing Sparsity: Let Blanks Stay Blank
Pigment is a sparse engine. Sparsity means Pigment only processes cells that contain data; blanks are skipped. A sparse model will naturally run faster and use less memory than a dense one. Preserving sparsity requires both structural and formula-writing discipline.
Do Not Fill Zeros or FALSE by Default
Using formulas like IF(condition, result, 0) turns blanks into actual values, forcing the engine to handle every cell. Let the "else" be blank instead. Use IF(condition, result) by itself (Pigment outputs blank when the condition is false) or write IF(condition, result, BLANK) explicitly. The performance difference can be significant: computing 1% of cells is roughly 100x faster than computing them all.
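The "1% of cells" claim is simple arithmetic, sketched here in Python (the cell counts are illustrative):

```python
# A 1%-populated metric: 1,000 of 100,000 possible cells hold data.
possible_cells = 100_000
populated = {i: 1.0 for i in range(1_000)}  # sparse storage: blanks are absent

# IF(condition, result, 0) -- the zero branch materialises every cell:
dense_evaluations = possible_cells

# IF(condition, result) -- blanks stay blank; only populated cells compute:
sparse_evaluations = len(populated)

speedup = dense_evaluations / sparse_evaluations  # 100x fewer evaluations
```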
Use ISDEFINED Instead of ISBLANK
ISBLANK() and ISNOTBLANK() return a boolean (TRUE/FALSE) for every possible cell, densifying a metric by turning blank cells into FALSE. ISDEFINED(x) returns TRUE for cells with data and blank for cells without data, preserving sparsity. Similarly, IFBLANK(A, B) fills blanks only where A is blank, whereas IF(ISBLANK(A), B, A) produces a TRUE/FALSE check everywhere first. Use Pigment's sparse-friendly functions wherever possible.
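The densification difference can be illustrated with Python dictionaries standing in for sparse metrics (hypothetical item names; not Pigment code):

```python
populated = {"E1": 5000.0, "E3": 4500.0}      # sparse metric over items E1..E5
all_items = ["E1", "E2", "E3", "E4", "E5"]

# ISBLANK-style: a boolean for EVERY possible cell -- the result is dense.
isblank_result = {e: e not in populated for e in all_items}

# ISDEFINED-style: TRUE where data exists, blank (absent) elsewhere -- sparse.
isdefined_result = {e: True for e in populated}
```

The dense result carries five cells where the sparse one carries two; at realistic scale that gap is the difference between a fast chain and a densified one.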
Guard Aggregations with Conditions
An [ADD: Dimension] or [REMOVE: Dimension] modifier without conditions evaluates across all members of that dimension, even where most have no data. Wrap heavy operations in an IFDEFINED guard:
// Evaluates every Region regardless:
Metric[ADD: Region]
// Only where Metric is defined:
IFDEFINED(condition, Metric, BLANK)
The guard restricts the calculation scope to the intersections defined by the condition, so the engine only computes where data is expected and avoids unnecessary evaluation on empty slices of the model.
Clean Source Data at the Entry Point
If your source system exports zeros or placeholder values, filter them out on import or in staging metrics. A [FILTER: currentvalue <> 0] after a Transaction List load keeps downstream calculations sparse. Avoid using 0 as a default input in assumption metrics; use blank as the default and only input values where they genuinely exist.
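A minimal Python sketch of the same idea — filtering placeholder zeros before they enter the model (the account codes and amounts are hypothetical):

```python
# Hypothetical raw export: the source system pads with zero-value lines.
raw_lines = [
    {"account": "6000", "month": "2024-01", "amount": 1200.0},
    {"account": "6100", "month": "2024-01", "amount": 0.0},   # placeholder
    {"account": "6200", "month": "2024-01", "amount": 0.0},   # placeholder
]

# Equivalent of a [FILTER: currentvalue <> 0] applied at load time:
staged = [line for line in raw_lines if line["amount"] != 0]

# Only the populated line reaches downstream metrics; the zeros never
# densify the model.
```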
| Key point: A metric that is 10% populated could execute roughly 10x faster than the same metric fully populated, all else equal. Blank cells are not missing data. They are the model doing less work on purpose. |
3. Scope and Dimensional Alignment: Model with Scope in Mind
When something changes in Pigment (an input, an import, a selector), the engine recalculates only the affected portions of the model. This is the scoped calculation system. Keeping calculations scoped means that small changes trigger small recalculations, which is the foundation of a responsive model under real-world usage.
What Breaks Scope
If a user updates data for one department, ideally only that department's slice of each dependent metric recalculates. Pigment manages this dependency graph and parallelises calculations where possible. Certain formula patterns break that scope:
- [REMOVE: Department] aggregates across all departments. Any metric depending on that result must recalculate for every department when any one department's data changes.
- Iterative time functions PREVIOUS, PREVIOUSOF, CUMULATE and DECUMULATE lose scope on the time dimension. Isolate these functions in dedicated metrics to contain the broadcast cost.
- Functions with an output scope of None force broader computation than intended. Where such functions are unavoidable, isolate them in a dedicated metric so they do not cascade unrelated recalculations downstream.
Scope as an Architectural Constraint
Scope is not just a formula property. It is an architectural design constraint. Structure calculation chains so that a change to one entity recalculates only the intersection directly affected. When you must break scope (for example, to produce a company-wide total), do it as late as possible in the calculation chain and in a dedicated aggregation metric that is clearly separated from the detail-level metrics. The aggregation recalculates; the detail chain underneath it does not.
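A toy Python sketch of this chain structure — an analogy for the dependency behaviour, not a model of Pigment's engine (department names and values are hypothetical):

```python
# Detail metrics are scoped per department; the company-wide total is a
# late, isolated aggregation at the end of the chain.
detail = {"Sales": 100.0, "R&D": 50.0}
recalculated = []

def update(department, value):
    # A scoped change touches only one department's slice of the detail chain.
    detail[department] = value
    recalculated.append(department)

def company_total():
    # The dimension-removing aggregation lives in its own dedicated metric,
    # so it is the only step that must read every department.
    return sum(detail.values())

update("Sales", 120.0)   # only the Sales slice recalculates
total = company_total()  # 170.0 -- the aggregation recalculates, not the detail chain
```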
| Best practice: Use filtering and conditional logic per item (which remains scoped) before resorting to global removals. If you must remove a dimension, isolate that calculation so it does not force unrelated recalculations downstream. Model design should also account for input workflows: the way data is entered determines how scope propagates through the dependency chain, so calculation structure and model architecture must be aligned to keep changes as localised as possible. |
Transaction List Architecture: A Structural Decision
Transaction Lists are the event-level ingestion layer in Pigment: GL lines, sales orders, HR events. From an architectural standpoint, they sit between raw source data and your planning metrics and should be treated as a distinct structural layer. Three architectural decisions define how that layer performs:
- One staging metric per numeric property. Create a single metric that performs the initial aggregation from the Transaction List across all relevant dimensions (Account, Department, Month). All downstream planning metrics reference that staging result. The expensive scan of the raw list is done once.
- Minimal computed properties on the list itself. Transaction List properties recompute sequentially, one property at a time, whenever the list is updated. Keep heavy logic in metrics, not in list properties.
- Filter zeros at load time. If the source system exports zero-value lines, add a filter condition on the import or in the staging metric to exclude them. Zero-value lines that pass through into planning metrics will densify everything downstream.
- Keep only necessary properties. Only retain properties in the Transaction List that are required for calculations or analysis. Avoid keeping fields “just in case”, as each additional property increases processing overhead and impacts model performance.
The formula mechanics of these patterns are covered in Part 2. The point to anchor in architecture is that the Transaction List is a source layer, not a calculation layer. Treat it accordingly.
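The "aggregate once into a staging metric" pattern, sketched in Python (hypothetical GL lines; the dictionary stands in for the staging metric):

```python
from collections import defaultdict

# Hypothetical GL lines in a Transaction List: (account, department, month, amount).
gl_lines = [
    ("6000", "Sales", "2024-01", 100.0),
    ("6000", "Sales", "2024-01", 50.0),
    ("6100", "R&D",   "2024-01", 80.0),
]

# One staging metric: the expensive scan of the raw list happens exactly once.
staging = defaultdict(float)
for account, department, month, amount in gl_lines:
    staging[(account, department, month)] += amount

# Downstream planning metrics read the staged aggregate, never the raw list.
actuals_6000 = staging[("6000", "Sales", "2024-01")]  # 150.0
```

If ten planning metrics each scanned the raw list directly, the scan cost would be paid ten times; staging pays it once.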
4. Size Estimation and Cell Volume: Plan for Scale
Estimating the cell volume of a model during design helps you validate the dimensional architecture before it is built. Discovering a cardinality problem after go-live is avoidable.
Estimate Cell Volume Before You Build
Cell volume is straightforward to estimate: multiply the number of items in each core dimension. For an employee-level workforce metric:
1,000 employees × 36 months × 2 versions = 72,000 possible cells
Add Department (10 items) unnecessarily:
1,000 × 10 × 36 × 2 = 720,000 possible cells (10× larger)
This simple estimation should be done for what will eventually become the heaviest metrics before building them. A large number of potential cells is not an issue by itself in a sparse model, but it signals higher structural complexity. Use it to assess whether every dimension is justified and to anticipate how sparsity and calculation scope will affect actual performance.
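The estimation is mechanical enough to script during design reviews. A minimal helper, using the numbers from the example above:

```python
from math import prod

def estimate_cells(dimension_sizes):
    """Possible-cell estimate: the product of core dimension item counts."""
    return prod(dimension_sizes)

# Employee-level workforce metric: 1,000 employees x 36 months x 2 versions.
lean = estimate_cells([1_000, 36, 2])                  # 72,000
with_department = estimate_cells([1_000, 10, 36, 2])   # 720,000 -- 10x larger
```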
Plan for Data Growth
Performance can degrade non-linearly as data grows. A formula running in 1 second with 100,000 cells may not scale proportionally to 1 million cells. Always ask what happens if the data doubles. If a formula is already near the limit at current scale, rework it before the data grows further. Pigment's calculation timeout is a hard limit, not a target to work toward.
Optimise High-Volume Metrics First
A metric with the largest dimension lists will typically be the performance bottleneck. Focus architectural and formula optimisation efforts there first. Ensure it is sparse, its dimensional structure is the minimum necessary, and its formulas apply the reduce-first principle from Part 2. A small assumption metric by Year and Version only will not cause performance problems at any scale.
Validate Size with Realistic Test Data
Before going live, load a realistic volume of data to test how the model performs. If you only test with 100 employees and go live with 1,000, you will miss bottlenecks. Use Pigment's Performance Insights to see how many cells are being calculated and how dense the heaviest metrics are. If a metric shows unexpectedly high cell counts at realistic scale, investigate and fix it before users arrive.
Part 1: Key Takeaways
- Keep each metric's structure restricted to the core dimensions required for the logic of the calculation.
- Use mapped dimensions or properties for reporting rather than adding reporting dimensions into metric structures.
- Decide early whether a relationship is static (property) or time-varying (mapping metric). The wrong choice has both correctness and performance costs.
- Design for sparsity: prefer ISDEFINED over ISBLANK, guard aggregations with conditions, filter zeros at the source.
- Keep scope intact as long as possible. When you must break it with a global aggregation, isolate that operation in a dedicated metric.
- Treat the Transaction List as a source layer. Aggregate it once into a staging metric. Keep heavy logic in metrics, not in list properties.
- Estimate cell volume during design for your heaviest metrics. If the number is alarming, fix the architecture before building.

