Building a benchmarking and service-level agreement strategy is a great challenge for many program managers. This is especially true for first-generation programs, where processes and procedures are usually being created out of whole cloth and the organization itself is emerging from chaos. The questions are many: Where do I start with my strategy? What metrics should I consider as part of my program performance management process? How do I keep my suppliers accountable and establish service-level agreements and key performance indicators that are attainable, yet at the same time create stretch goals that drive improvement and program growth?

For any other category of spend within your organization, these numbers may be simple to calculate and establish. When buying raw materials, you have a defined standard of purity and an established delivery date. When purchasing machinery, the delivery requirements are defined and functional standards articulated. Contingent labor is not that clean, and drawing a line from program performance to client success is muddy at best — impossible at worst.

Flawed approach. Often, program managers collect sample scorecards and pull out the metrics that seem to resonate with them. The reasoning: If Company A is in my industry and uses similar skillsets, then I should use a benchmarking strategy that mimics Company A's. But this reasoning is flawed. Each organization has a unique footprint and culture that defines how things get done. Take time-to-fill, an oft-used and seemingly straightforward KPI. Your company and Company A may have different vendor management system (VMS) configurations that affect the requisition process, or your culture may be more high-touch than Company A's, with hiring managers requiring more hand-holding through the process as part of your long-term adoption strategy.

Program managers will approach their problems differently, and like a fingerprint, the solutions are unique to them alone. What separates the good ones from the mediocre is their ability to get inspiration from others, but adapt others’ ideas to their own unique needs. We always recommend that emerging programs start with their own data first when constructing acceptable benchmarks.

With this in mind, you need to work with your providers to create your ZAP, or zone of acceptable performance. The ZAP moves the narrative away from an arbitrary performance goal that may rest on incorrect or inaccurate assumptions and toward paired high- and low-performance targets. For example, instead of agreeing on an SLA of a two-week fill time, agree that time-to-fill will be a program KPI with a high and a low target. As you gather data, work with your provider to create the ZAP: calculate the average, then add and subtract one-half to one standard deviation from that average to set the high and low bounds. Over time, that band will compress and true performance targets will reveal themselves. Only then will you be able to establish targets that make the most sense.
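The arithmetic behind the ZAP is simple enough to sketch. The snippet below is an illustration only: the time-to-fill figures are hypothetical, and the `zap_band` helper and its half-standard-deviation default are our own naming, not a standard tool.

```python
import statistics

def zap_band(values, k=0.5):
    """Compute a zone of acceptable performance (ZAP) from observed data.

    Returns (low, high) targets: the mean minus/plus k standard
    deviations. k=0.5 uses the half-standard-deviation option;
    k=1.0 widens the band to a full standard deviation.
    """
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)  # sample standard deviation
    return mean - k * stdev, mean + k * stdev

# Hypothetical time-to-fill observations, in days
fills = [12, 15, 9, 14, 18, 11, 13, 16, 10, 12]
low, high = zap_band(fills)
print(f"ZAP: {low:.1f} to {high:.1f} days")  # here: 11.6 to 14.4 days
```

As more requisitions close, rerun the calculation on the larger data set; the band narrows naturally as the standard deviation stabilizes, which is the compression the article describes.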

Like your mother always said, each of us is a unique creation, and you shouldn't compare yourself to others. By relying on your own data and your competent provider partners, you can create a strategy that stands the test of time.