How To Measure Variability in Demand… And Why You Should Think Twice Before Using It For Sizing Inventory

Here is a story about an attempt to size inventory buffers by measuring demand variability. It is based on real experience, and there are lessons to be learned from it.

I’ll tell you the story in both words and data. I have compiled a dataset and will share it along with the Excel tool that I used to analyse it. The download link is further down below. It will make more sense if you read the introduction story first.

We are working with a midsize manufacturing company. The purpose of the project is to characterise demand and use it to size the inventory buffers. As such, it might be relevant to either make-to-stock or make-to-order manufacturing. And distribution businesses may find it helpful too.

The company, like so many others, has a customer lead-time that is shorter than their production lead-time. And much shorter than their lead-time for raw materials. They have limited forecasts (rated from “fair” for some products to “non-existent” for others). They know they cannot just feed the forecast into MRP and come up with a plan to build and buy. They need to size inventory according to the uncertainty of future demand.

First, we need to define units. There is a range in demand volume, from a few dozen units a month to many thousands. So, the question “How much inventory should we hold?” is not meaningfully answered in units of measure (in this case “pcs” and “cartons”). Is a thousand pieces a lot of inventory? That depends on demand. So, the first goal is to measure what a day of demand is for every item. Call it the ADU (average daily usage), Day of Demand, or Day of Supply. We will use Day of Demand.

The data is in a single table. It has just three columns: Item, Week and Qty. This will be refreshing for some of you who have waded through our bigger datasets!

It is one long table. This example has 20 items and three years of demand, summarised weekly. In the real example, there were many more products, but I have created 20 because that is enough to tell this story.

It is natural to want to see this data in a matrix. With items as rows and the weeks as columns. And we could build a pivot table to show it this way, if we wanted to.

I want to show you how to do these calculations in the table form because it is much more flexible and powerful that way. If we built a matrix, and then the number of items changed, or the number of weeks, then the structure would change too. This is an example of our principle to calculate in Normal Tables as part of the Fast Excel Development Method.

If you want to swap out this data and add your own, then the functions will still work. Even if you have 100 items and 5 years of history.

You will see that every row shows a week of demand for the first item, and then the table goes on to the next item. A total of 3150 rows for 20 product items and 157 weeks. If this were 300k rows for 2000 products, it would still work perfectly well. It might take a minute or so to calculate rather than 6 seconds. Calculating in tables with the Fast Excel Development Method keeps it flexible and powerful at any size.

The complete dataset and workbook that I used to analyse this is here.

Using Row Numbers to Define a Range Within a Column

I want to introduce a technique that allows us to calculate values for each item in a table like this. We want to apply a formula to a sub-section of a column. This should match all of the records for the first item “FC1001”, from the first week to the last week.

Using our Fast Excel Method, place the formula in row 8. The automated macros in the template will apply this formula to the whole table of data. The first two formulas return the first and last row numbers of the range for each item. In the example here, item FC1001 starts on row 11 and continues to row 167. In the download, you will see that this has applied to all items.
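To make the idea concrete, here is the same first-row/last-row lookup sketched in Python. The item codes and column layout follow the article; the demand figures are made up for illustration, and the exact Excel formulas in the workbook may differ from this sketch.

```python
# The long table has three columns: Item, Week, Qty.
# Only the Item column is needed to find each item's row range.
rows = [
    ("FC1001", "2020-W01", 120),
    ("FC1001", "2020-W02", 135),
    ("FC1001", "2020-W03", 110),
    ("FC1002", "2020-W01", 40),
    ("FC1002", "2020-W02", 55),
]

items = [r[0] for r in rows]

def item_range(item):
    """Return (first_row, last_row) indices for an item, like the
    first-row/last-row formula pair in the workbook (0-based here,
    rather than starting at worksheet row 11)."""
    first = items.index(item)             # first occurrence of the item
    last = first + items.count(item) - 1  # first + record count - 1
    return first, last

first, last = item_range("FC1001")
# Three weekly records for FC1001, so the range spans three rows.
```

Because the range is derived from the data rather than a fixed matrix, it keeps working when the number of items or weeks changes, which is the point of the normalized-table approach.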

We can use the results of these simple formulas to do all the calculations we need in the same normalized table of data.

The first is a simple sum across the whole date range. This is perhaps not that useful in practice, as we would not want to take an average across three years of data. I put it in to illustrate the concept.

The Double-Index Method to Select The Range

We have talked a lot about the INDEX formula. How it is our favourite formula and that sort of thing. INDEX usually returns a single cell, based on a row (and sometimes column) number and a range. In this method, we use two INDEXes to select the start and end of a range.

The row numbers are the start and end of the range. (Note, you do not need to use the “@” mentioned here for this application.)
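For readers who think better in code than in formulas, here is the equivalent of selecting a sub-range by start and end row and summing it, sketched in Python. The quantities and row numbers are illustrative, not from the dataset:

```python
# Qty column from the long table (made-up numbers).
qty = [120, 135, 110, 40, 55]

# Start and end row numbers for one item, as found by the
# first-row/last-row formulas (0-based here).
first, last = 0, 2

# Excel's INDEX(...):INDEX(...) selects Qty rows first..last inclusive;
# a Python slice is end-exclusive, hence last + 1.
total = sum(qty[first : last + 1])
```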

The example workbook goes on to show how you can apply a moving average period of 90 days rather than the whole history. Your data might not have dates that are exactly 90 days apart, so there are further calculations to handle that, and it is easier to follow inside the workbook. The sheet is called “WeeklyDemand”.
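As a rough sketch of the moving-average idea, assuming 90 days is approximated by the last 13 weekly buckets and ignoring the uneven-dates handling that the workbook does properly (the demand numbers are invented):

```python
# Weekly demand for one item (made-up numbers).
weekly_qty = [100, 120, 90, 110, 105, 95, 130, 115, 100, 98, 102, 125, 111, 107]
window = 13  # roughly 90 days of weekly buckets

def moving_average(values, n):
    """Trailing n-period average ending at the most recent point."""
    recent = values[-n:]
    return sum(recent) / len(recent)

adu_weekly = moving_average(weekly_qty, window)  # average weekly usage
adu_daily = adu_weekly / 7                       # one Day of Demand
```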

Calculating Variability In a Normalized Table

The calculation for variability uses the Coefficient of Variation (COV): the standard deviation divided by the mean.


We use the same technique to calculate each element: the standard deviation (STDEV formula), the mean (AVERAGE formula), and then COV = STDEV / MEAN.
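Here is a minimal Python sketch of the same COV calculation. The `stdev` function is the sample standard deviation, matching Excel's STDEV; the demand series are invented to show one steady item and one erratic one:

```python
from statistics import mean, stdev

# Weekly demand per item (made-up numbers).
demand = {
    "FC1001": [120, 135, 110, 128, 117, 140, 109, 131],  # steady
    "FC1002": [40, 80, 15, 95, 60, 10, 70, 55],          # erratic
}

def cov(values):
    """Coefficient of Variation = standard deviation / mean."""
    return stdev(values) / mean(values)

cov_by_item = {item: round(cov(q), 2) for item, q in demand.items()}
# The steady item gets a low COV, the erratic one a high COV.
```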

Then, use a Pivot table to summarise the results and we can see the COV value for every item.

There is quite a range of values. We would expect it to be somewhere between zero and one. And the lowest is 0.17 and the highest 0.81.

So, there we go. We have a day of demand defined. We can calculate what one day of demand is in units. Then we set buffer sizing on groups of items, defined by a multiple of days of demand. A high variability item has, say, 60-90 days of inventory cover whilst a low variability item has 15-30 days.
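Sketching that sizing logic in code: the 60-90 and 15-30 day bands come from the text above, while the medium band and the ADU value are assumptions added for illustration.

```python
# Days of cover per variability band. High and low follow the article
# (midpoints of 60-90 and 15-30); "medium" is an assumed in-between value.
cover_days = {"high": 75, "medium": 45, "low": 22}

def buffer_units(adu, variability_band):
    """Target inventory in units = ADU (units/day) x days of cover."""
    return adu * cover_days[variability_band]

target = buffer_units(adu=40, variability_band="high")
# 40 units/day * 75 days of cover = 3000 units
```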

Of course, we have to consider lead-time for replenishment of the item. But in this line of thinking, variability is one of the major factors in deciding how much stock we hold to buffer against uncertainty. In DDMRP, for example, one of the few factors in buffer zone sizing is a variability factor, which is defined as the Standard Deviation / Average Daily Usage.

Hold On. Time for a Sense-Check!

Before we go off and apply these measurements, it is worth doing a sense-check, and this is what we did in this example. Divide the items into categories: high, medium and low uncertainty. An item in the high uncertainty category would be in the top third when it came to unpredictable demand. Bring in planners who are familiar with the items, and before showing them the results, ask them to guess the category for a sample of items.

If this variability measurement is a good indicator of uncertainty, then it should agree with the planners’ guesses most of the time. You wouldn’t expect a perfect match, of course, but it should pass a gut-feel check of whether it is actually showing what it is meant to show.

When we performed this sense-check, the results were not good. There was no correlation at all. About as much as a random selection. Oh no.

Analysis – Dive Into the Data

When we looked at each item, there was a good reason why each unexpected result returned the variability it did. For example, here is one item: FC1011. Medium volume, COV = 0.7, which would make it high variability. And supposedly high uncertainty.

“Ah, well that might be because we sell a lot of that one in the summer. But that is not a big problem – it happens the same way every year.”


So, we have a seasonality pattern. That is variability too, but not the kind of variability that is a good marker for uncertainty. But that is not all.

“And that one is being phased out. We sold a lot in 2020, but it has been replaced by another item. The new item will show the opposite pattern: growth.”

– Another Planner

And so, it went on. We went through each item, listing the examples of predictable variability. In the end we came up with four main themes.

  • Seasonality. Two major seasonal trends: One had a peak in the summer months (June to August, this was a northern hemisphere company) and another group of products that sold well in the winter. There were also some minor trends that had dual peaks.
  • Product Lifecycle. Products being introduced, old ones phased out, or steady mature demand throughout the entire history. The ramp-up was harder to predict than the phase-out. But generally, three groups that you could predict to a greater or lesser extent.
  • Campaigns and Promotions. The retail and distribution chain for this company often ran promotions. These had a sizable impact on future demand, and were often fairly predictable.
  • Monthly Variation. Some items were only ordered once per month or had a large hockey-stick effect at the end of the month. Measuring variability at the monthly level would have eliminated this, but that wasn’t appropriate for other products.

In the example data, for simplicity, we have represented Seasonality and Product Lifecycle.

If you look at all items at once, you get this spaghetti.

Needless to say, there are too many items to make sense of it.

However, once we have sorted the items into the categories for product cycle and seasonality, these predictable patterns appear.

We see something similar with Product Cycles.

This shows three main product cycle groups: Group 1 shows steady demand across the history; Group 2 products at the end of life and ramping down; Group 3 a fast ramp-up as new products are introduced.

Groups 2 and 3 for both Seasonality and Product Cycle have much more variability than Group 1.

And much of that variability is predictable – has more to do with expected cycles and less to do with random uncertainty that the inventory is intended to buffer.

Now, it is not true to say that the seasonality trend is perfectly predictable, or that the launch of a new product will hit all its sales targets exactly.

However, the point is that there are a number of variability effects mixed into any demand data, and each of these effects has an element that is predictable. So, in many cases aggregate variability is a poor indicator of uncertainty, because we are not considering the extent to which an effect can be predicted. The effect of seasonality and product cycle in this example overwhelms the effect of random fluctuations in demand.

When we strip out the effects of Seasonality and Product Cycles, we can measure what’s left. And then that variability is much closer to something that will stand in for uncertainty. There are now five variability groups that represent what is left after seasonality and product cycles have been neutralised. Higher group means more variability. These groups were confirmed by the planners to be much more representative of the uncertainty in demand.
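The text above does not prescribe an exact method for stripping out the patterns. As one illustration of the principle, here is a simple seasonal-index approach in Python, on invented quarterly data with a strong summer peak:

```python
from statistics import mean, stdev

def cov(values):
    """Coefficient of Variation = standard deviation / mean."""
    return stdev(values) / mean(values)

# Two years of quarterly demand with a summer peak (made-up numbers).
demand = [100, 180, 300, 120, 110, 190, 310, 130]
periods_per_cycle = 4

# Seasonal index: each quarter's average relative to the overall mean.
overall = mean(demand)
index = [mean(demand[i::periods_per_cycle]) / overall
         for i in range(periods_per_cycle)]

# Deseasonalise, then measure the variability of what is left.
residual = [q / index[i % periods_per_cycle] for i, q in enumerate(demand)]

raw_cov = cov(demand)        # inflated by the seasonal cycle
residual_cov = cov(residual)  # much closer to random uncertainty
```

On this toy data the raw COV is dominated by the seasonal swing, and the residual COV is a fraction of it, which is the effect the planners confirmed in the real project.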

This final chart shows a scatter plot of these variability groups (Random Variability if you like) versus the original COV measurement.

In each Variability Group there is a wide range of COV values. There is almost no correlation, which shows that the majority of the variability being measured was due to the more predictable effects that have been stripped out.

There are two items in the lowest group that have the 5th and 7th highest COV. There are two items in the second-highest group that have the 4th and 5th lowest COV.

What Does all this mean?

If we are using COV as a major factor to size inventory, then it is possible that predictable variation effects are determining a large part of that measurement. Which means inventory is likely to buffer predictable demand changes in some items, while lacking the cover needed for unpredictable demand in others. Back to the age-old problem of not enough stock of some items and too much of others.

Hang on, you might say. It is all variation in demand. The inventory is working to buffer that, regardless of the cause. Well, that only makes sense if you are not going to do anything about the more predictable effects. In practice, planners know full well the effects of seasons, product cycles, campaigns and promotions on demand. They won’t ignore that critical intelligence; they will tend to override the inventory targets instead. If the people who are planning have no confidence in the system, then they will ignore the system. This applies as much to “black box” MRP and planning systems as it does to inventory sizing spreadsheets.

So, what do we do instead?

The solution is not to replicate the process I describe here to get to the Random Variability groups. I tell this story to highlight the dangers of simply feeding a demand history into a variability measurement. Reproducing it would be a somewhat convoluted process: take demand history, identify predictable patterns, allocate items to those groups, strip each group’s pattern from the history, measure the remaining COV, and then feed that COV into an inventory sizing formula.

The task of understanding patterns and applying them to future demand has a name: Forecasting! If we are going to go to the trouble of controlling the predictable parts of demand variation, why not apply it directly to the planning process? A forecast will be more effective at managing response to somewhat predictable patterns. Naturally the forecast will get some things right and some things wrong. The inventory is there for when it gets things wrong. So measuring the forecast is also important. And perhaps a much more relevant measure of uncertainty.

To learn more about forecasting, I am running a webinar on 14 April 2022 – “How Forecasting Fits a Successful Process for Supply Chain Planning”

Learn more and register here.

