
Performance in Action: Workforce and FP&A Patterns with Practical Reference

  • April 16, 2026
Gawain
Employee

Introduction

 

Building something that works is not enough in Pigment. The goal is to build it right and to optimise for performance from the first design decision to the last. This guide consolidates best practices from the Pigment Modeling Palette into a single reference, updated with practical examples and annotated with the reasoning behind each recommendation. A performant Pigment model is fast, scalable and maintainable: a lean engine that delivers results quickly and gets used daily.

The series covers every factor that affects performance: core dimensional architecture and sparsity; scoping and formula efficiency; refactoring, snapshots, transaction lists and import design; size estimation, version management, testing and user adoption. Each part includes concrete Workforce Planning and FP&A examples, comparisons of inefficient versus optimised formulas with estimated performance impacts, and a consolidated best practices versus pitfalls reference table in Part 4.

 

Who this guide is for. Expert Pigment modelers, solution architects and EPM professionals who want a definitive reference for building applications that remain fast as they scale, survive organisational change and earn the daily trust of the people who use them.


Performance in Action: Workforce and FP&A Patterns with Practical Reference

 

This part translates every principle into concrete modelling patterns across two case studies, closes with the human side of model performance, and provides a consolidated reference table.

 

The two case studies below are chosen because they stress different parts of the engine. The workforce model tests business-grain calculations and mapping patterns at the employee level. The FP&A model tests broad dimensional intersections, transaction-to-metric aggregation and version isolation. Both include quantitative cell count comparisons so the performance impact of each architectural choice is concrete.

 

Case Study 1: Optimising a Workforce Planning Model

 

Scenario: We are building a model to plan employee-related costs and headcount. We have a list of Employees (with properties including Department and Contract Type), a Time dimension (Month) and a Version dimension (Budget vs. Actual). We need monthly headcount and total salary cost by department.

 

The Naive Approach

One might create a "Headcount by Department" metric with structure [Employee, Department, Month, Version] where the formula checks if an employee is active and returns 1. Both Employee and Department are structural dimensions. Each employee belongs to only one department, so for any given employee, all other department combinations are irrelevant. With 1,000 employees and 10 departments, this creates 10,000 employee-department combinations per month per version, of which only 1,000 ever hold a value. Structural redundancy forces densification.

 

The Optimised Approach

 

Step 1: Calculate headcount at the employee level. Create "Employee Active Months" with structure [Employee, Month, Version]:

Employee Active Months = IF(EmploymentStatus = "Active", 1, BLANK)

No Department dimension. This metric produces a 1 for active employees and blank for inactive employees. With 800 of 1,000 employees active in a typical month, roughly 80% of cells are filled and the remaining 20% stay blank.
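The sparsity idea can be sketched outside Pigment. Below is a minimal Python analogue (illustrative names, not Pigment syntax) in which a blank cell is simply an absent key, so inactive employees cost nothing to store or scan:

```python
# Hypothetical sketch: a sparse metric as a dict keyed by
# (employee, month). Blanks are absent keys, mirroring how leaving
# cells blank (rather than writing 0) keeps the metric sparse.
def active_months(status):
    """status: {(employee, month): 'Active' | 'Inactive'}"""
    return {k: 1 for k, s in status.items() if s == "Active"}

status = {
    ("E1", "Jan"): "Active",
    ("E2", "Jan"): "Inactive",   # no cell is created for this one
    ("E1", "Feb"): "Active",
}
metric = active_months(status)
# metric == {("E1", "Jan"): 1, ("E1", "Feb"): 1}
```

Downstream aggregations then iterate only over the keys that exist, which is the whole point of keeping the metric sparse.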

 

Step 2: Aggregate to department when needed. Create "Headcount by Department" with structure [Department, Month, Version]:

Headcount by Department = Employee Active Months[BY SUM: Employee->Department]

Pigment processes only the active Employee records and aggregates them per Department. The resulting metric has at most 10 × 12 × 2 = 240 data points per year, compared to 240,000 possible cells in the naive approach. That is a 1,000x reduction in possible cells at the department level.
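The BY SUM rollup can be mimicked with a small Python sketch (hypothetical `by_sum` helper, not Pigment's actual engine): only the cells that exist at the Employee grain are visited, and each is routed to its department.

```python
# Sketch of the aggregation pattern: roll a sparse employee-level
# metric up to department level via an employee -> department mapping.
from collections import defaultdict

def by_sum(metric, mapping):
    out = defaultdict(int)
    for (emp, month), value in metric.items():  # only filled cells
        out[(mapping[emp], month)] += value
    return dict(out)

dept_of = {"E1": "Sales", "E2": "Sales", "E3": "Ops"}
active = {("E1", "Jan"): 1, ("E2", "Jan"): 1, ("E3", "Jan"): 1}

headcount = by_sum(active, dept_of)
# headcount == {("Sales", "Jan"): 2, ("Ops", "Jan"): 1}
```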

 

Step 3: Apply the same pattern to salary cost. "Monthly Salary Cost" at [Employee, Month, Version] stays blank for inactive employees. "Salary Cost by Department" aggregates:

Salary Cost by Department = Monthly Salary Cost[BY SUM: Employee->Department]

 

Results

The optimised design holds the heaviest metrics at the Employee grain, which is the correct and necessary grain for individual-level logic. The table below shows the cell count comparison:

 

| Metric | Naive approach | Optimised approach |
| --- | --- | --- |
| Employee Active Months | Not applicable | 24,000 possible cells (~19,200 filled) |
| Headcount by Department | 240,000 possible cells (90%+ blank) | 240 possible cells (~240 filled) |
| Total cells managed | 240,000 | ~24,240 (10x fewer) |
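The arithmetic behind the comparison is easy to verify; the sketch below assumes the stated sizes of 1,000 employees, 10 departments, 12 months and 2 versions.

```python
# Cell-count check for the naive vs. optimised designs.
employees, departments, months, versions = 1000, 10, 12, 2

# Naive: Department carried as a structural dimension on the metric.
naive = employees * departments * months * versions          # 240,000

# Optimised: employee-grain metric plus a small department aggregate.
optimised = (employees * months * versions                   # 24,000
             + departments * months * versions)              # + 240

ratio = naive / optimised  # roughly 10x fewer cells to manage
```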

 

Adding a new reporting slice, "Headcount by Contract Type", requires only reusing Employee Active Months with [BY SUM: Employee->ContractType]. No base calculations are touched and no sizes change.

 

Case Study 2: Optimising an FP&A Model (Budget vs. Actuals)

 

Scenario: An Account dimension (Revenue, Expenses, etc.), a Department dimension, a Time dimension (Month/Year) and a Version dimension (Actual, Budget, Latest Forecast). We need a forecast that applies growth rates to prior actuals, plus variance analysis between Budget and Actual.

 

Version Dimension and Calculation Isolation

Inefficient design: one metric handling all versions. Every change forces Pigment to evaluate the growth logic for every version and every month, including Actuals that are static:

Revenue = IF(Version = "Forecast",
  Revenue_Actual[LOOKUP: PY] * (1 + GrowthRate),
  Revenue_Actual)

 

Optimised design: separate the logic by version. This is the same pattern introduced in Part 2 Section 1.

  • Actual Revenue: pure storage, imported or input. No formula. Can be snapshotted when the period is closed. This metric never recalculates unless new actuals are imported.
  • Budget Revenue: Version=Budget inputs. Static once the budget cycle closes.
  • Forecast Revenue: formula-driven, scoped to accounts where growth is defined:

Forecast Revenue = IF(ISDEFINED(GrowthRate),
  Actual_Revenue[LOOKUP: PY] * (1 + GrowthRate),
  Actual_Revenue[LOOKUP: PY])

  • Consolidated Revenue (Nexus): combines the above for centralised financial reporting.
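To make the isolation concrete, here is a minimal Python analogue (illustrative names, not Pigment syntax): Actuals are plain stored data with no formula, while the forecast is a pure function of the growth drivers, so editing a driver can never touch Actuals.

```python
# Actuals: pure storage, no formula. Never recomputed here.
actual_revenue = {("Acct1", 2024): 100.0, ("Acct2", 2024): 50.0}

# Growth drivers, defined only where a growth assumption exists.
growth_rate = {"Acct1": 0.10}

def forecast_revenue(year):
    """Forecast = prior-year actual, grown where a rate is defined."""
    out = {}
    for (acct, y), value in actual_revenue.items():
        if y == year - 1:  # prior-year lookup, like [LOOKUP: PY]
            out[(acct, year)] = value * (1 + growth_rate.get(acct, 0.0))
    return out

fc = forecast_revenue(2025)
# Acct1 grows ~10% to ~110.0; Acct2 carries forward at 50.0
```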

 

Transaction List Integration

FP&A models almost always include a GL transactions list. Create a staging metric that processes it once:

Data_GL_Amount = GL.Amount[BY SUM: GL.Account, GL.Department, GL.Month]

All planning metrics reference Data_GL_Amount as their starting point. The expensive scan of the raw GL list is done once. All downstream metrics are guaranteed to start from the same base numbers. Actuals data flows from this staging metric into the Actual Revenue metric.
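In Python terms, the staging pattern is a single pass over the raw lines into a pre-aggregated dictionary (illustrative data; field names assumed); every downstream metric reads the dictionary instead of re-scanning the list.

```python
# One scan of the raw GL lines, summing Amount by
# (Account, Department, Month), analogous to the staging metric.
from collections import defaultdict

gl_lines = [
    {"Account": "Revenue",  "Department": "Sales", "Month": "Jan", "Amount": 120.0},
    {"Account": "Revenue",  "Department": "Sales", "Month": "Jan", "Amount": 30.0},
    {"Account": "Expenses", "Department": "Ops",   "Month": "Jan", "Amount": 45.0},
]

data_gl_amount = defaultdict(float)
for line in gl_lines:  # the expensive scan happens exactly once
    key = (line["Account"], line["Department"], line["Month"])
    data_gl_amount[key] += line["Amount"]

# data_gl_amount[("Revenue", "Sales", "Jan")] == 150.0
```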

 

Variance Calculation

Variance = IF(ISDEFINED(Actual_Revenue) AND ISDEFINED(Budget_Revenue),
  Actual_Revenue - Budget_Revenue, BLANK)

The ISDEFINED guard ensures variance is only computed where both values exist. Combinations where one is absent stay blank rather than showing a misleading zero.
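A Python analogue of the ISDEFINED guard (illustrative names): the comprehension only emits a cell when both inputs exist, so missing combinations stay blank rather than becoming zeros.

```python
# Variance computed only where both versions hold a value;
# absence of a key plays the role of BLANK.
actual = {"Acct1": 100.0, "Acct2": 80.0}
budget = {"Acct1": 90.0}  # no budget exists for Acct2

variance = {
    acct: actual[acct] - budget[acct]
    for acct in actual
    if acct in budget  # both defined -> compute; else stay blank
}
# variance == {"Acct1": 10.0}; Acct2 stays blank, not a misleading 0
```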

 

Parallel Calculations for the Income Statement

Compute Total Revenue and Total Expenses in separate metrics. Pigment can calculate them simultaneously. Profit = Revenue minus Expenses is then a trivial subtraction. Both summary metrics can be calculated independently since they draw on different account subsets. If they were computed inside a single formula, execution would be sequential.
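The parallelism argument can be sketched with Python's `concurrent.futures` (illustrative accounts and amounts): the two totals touch disjoint account subsets, so they can run simultaneously, and Profit is then a trivial subtraction.

```python
# Two independent summary metrics over disjoint account subsets,
# submitted as separate tasks so they can execute in parallel.
from concurrent.futures import ThreadPoolExecutor

amounts = {"Product A": 500.0, "Product B": 300.0,
           "Salaries": 400.0, "Travel": 100.0}
revenue_accounts = {"Product A", "Product B"}
expense_accounts = {"Salaries", "Travel"}

def total(accounts):
    return sum(amounts[a] for a in accounts)

with ThreadPoolExecutor() as pool:
    rev = pool.submit(total, revenue_accounts)   # runs concurrently
    exp = pool.submit(total, expense_accounts)   # with the line above
    profit = rev.result() - exp.result()
# profit == 800.0 - 500.0 == 300.0
```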

 

Results

The table below shows the cell count comparison for the core Revenue metric in a model with 500 accounts, 50 departments, 60 months and 3 versions:

 

| Metric | Monolithic approach | Isolated approach |
| --- | --- | --- |
| Revenue (all versions in one formula) | 500 × 50 × 60 × 3 = 4.5M cells; formula evaluates for every version on every change | Not applicable: no monolithic metric |
| Actual Revenue (static) | Not isolated: changes to any version recalculate everything | 500 × 50 × 60 = 1.5M cells; recalculates only on new actuals import |
| Forecast Revenue (formula-driven) | Mixed into the monolithic formula | Subset of 1.5M cells where GrowthRate is defined; recalculates only when a growth driver changes |

 

When a CFO updates one growth assumption, only the Forecast Revenue metric recalculates, and only for the accounts where GrowthRate is defined. Actual Revenue and Budget Revenue are untouched. Adding a new version such as "Revised Forecast" means parameterising the Forecast metric, not reworking a monolithic formula.

 

User Adoption and Auditability

 

A model that is fast but not adopted is not performing. A model that cannot be understood or audited will degrade over time as messy fixes accumulate on top of logic no one is confident to change. A model that delivers value continuously and can be evolved without breaking is the actual goal.

 

Adoption as a Performance Metric

If planners actively use the model daily, it is meeting their needs. If they avoid it or maintain parallel Excel workbooks, the model is too slow, too hard to use or not trusted. Performance optimisations should not come at the cost of usability. With careful design, you can have both.

 

Build for the End-User

Create focused boards for each persona: each user sees only the metrics relevant to them, so boards load a smaller set of blocks faster. Use one primary view per board for heavy input grids to avoid loading too many metrics simultaneously. Pre-calculate results that need an instant response; for rarely used reports, calculate on demand. All of these choices reduce engine load and make the application feel snappier.

 

Auditability and Maintenance

A model built with clear structure, proper naming and modular logic is easier to audit. When something looks wrong or slow, you can trace how it is calculated. If the model is an opaque tangle, nobody is confident enough to change the complex parts, so layers get added on top of layers, making it slower and harder to understand. Break up complex formulas, add descriptions via text blocks and organise metrics in logical folders by theme.

 

Monitoring and Iteration

Monitor which boards and metrics users access frequently. If some parts of the model are never used, consider removing or simplifying them. There is no point paying a performance cost for something no one needs. Collect feedback from users on performance; they often notice a specific action is slow that was never part of a formal test. Use that feedback to refine the model continuously.

 

Flexibility for Change

Business requirements will change. A model with lean dimensions, modular calculations and clear separation of concerns can absorb those changes without a major rewrite. New logic can be added in the right place and old logic can be swapped out cleanly. This avoids quick fixes that degrade performance over time.

 

Summary: Best Practices vs. Common Pitfalls

 

Use this table as a checklist when reviewing or auditing Pigment models.

 

| Aspect | Best Practice | Common Pitfall |
| --- | --- | --- |
| Dimensional Architecture | Core dimensions only. Use properties and mapped dimensions for reporting attributes. Keep dimension lists trimmed to active items. | Extra dimensions "just in case". Years of inactive data in the Calendar. Derived dimensions (e.g. Year alongside Month) added structurally. |
| Mapping Strategy | Decide early: static relationship = property; time-varying relationship = mapping metric. | Static property for a time-varying relationship (wrong numbers). Full structural dimension when a property would suffice. |
| Sparsity | Leave cells blank unless a value must exist. ISDEFINED over ISBLANK. Filter zeros at the source. | IF(condition, value, 0) everywhere. Importing zero-value lines from source. Boolean metrics stored as TRUE/FALSE instead of TRUE/BLANK. |
| Formula Writing | Scope first, reduce first. Explicit dimensional alignment. Sparse-friendly functions (IFDEFINED, IFBLANK). | Compute everything, filter at the end. Implicit alignment producing silent large intermediate joins. PREVIOUS/CUMULATE used without isolation. |
| Calculation Structure | Calculate once, reference often. One staging metric per TL property. Split large calculations into parallelisable parts. | Monolithic formulas that do everything. Repeated sub-expressions across multiple metrics. Multiple raw TL scans from different metrics. |
| Transaction Lists | One staging metric per numeric TL property (BY SUM across all dims). Minimal computed TL properties. Filter zeros at import. | Scanning the raw TL repeatedly from multiple metrics. Heavy logic in TL properties. Zero-value lines imported without filtering. |
| Import Design | Filter nulls and zeros at import time. Import into staging metrics. Chunk large imports. Schedule heavy imports off-peak. | Importing full unfiltered datasets. Loading directly into planning metrics. Large imports during peak concurrent usage. |
| Views vs. Stored Metrics | Show Values As and Calculated Items for reporting-only KPIs. Metrics only when the result is referenced downstream. | New metric for every reporting request. Not realising that Show Values As results cannot be referenced by other formulas. |
| Snapshots and Versions | Snapshot completed periods. Lean Version dimension (active cycles only). Scenario for what-ifs only. Data Slices for comparisons. | Historical data kept live unnecessarily. Scenario used as a substitute for a Version dimension. Old snapshots accumulating without cleanup. |
| Testing and Scaling | Production-like data volumes. Concurrency testing. Timer on key user flows. Edge case testing. | Testing with minimal data. No concurrency testing. Discovering performance problems at go-live. |
| Performance Insights and Profiling | Check maximum size and density regularly. Profile immediately after test actions (3-day window). Repeat after each optimisation. | Only checking when users complain. Profiling days after the test action. Not verifying that an optimisation made a measurable difference. |
| Hygiene and Maintenance | Regular board and block hygiene reviews. Block Explorer plus dependency diagram. "Referenced" is not the same as "useful". | Test artefacts and abandoned chains accumulating. Unused metrics still calculating. No scheduled model health reviews. |
| User Experience and Adoption | Focused boards per persona. Pre-calculated critical results. Documented metrics. Continuous user feedback loop. | Usability sacrificed for model structure. Overloaded boards. Opaque calculations users do not trust, pushing them back to Excel. |

 

The key is a consistent mindset: think like an optimiser at every step. Every design decision (a dimension added, a formula written, a data load configured) deserves the question: is this the leanest and fastest way? Can it be simplified without sacrificing correctness? That habit, applied consistently from the first model design session, produces applications that run fast, scale with the business and earn the daily trust of the people who plan with them.

Keep this guide as a working reference. Revisit it when a model starts to feel slow, when a new build is being scoped, or when a model review is due. The principles do not change with Pigment versions. The tools improve, but the architectural reasoning behind them stays constant.