Bottom line up front: you can build a robust simulation tool with the Fast Excel Development Template and zero coding; in this article I explain how, and I share an example of inventory coverage simulation.
Simulation in supply chain and planning enables businesses to model, analyze, and optimize processes, helping to predict outcomes, manage uncertainties, and improve decision-making for inventory, production, and logistics.
Recently my excellent colleague at Production-scheduling.com and good friend Kien Leong shared how integrating Python with Excel can unleash a number of interesting applications, including advanced inventory simulation. I warmly invite you to visit the content he prepared for you; it is well worth your time. Of course, going with Python, which offers free and open source libraries for almost all computing applications, widens the horizon of what you can do, at the cost of mastering some Python coding.
But can we do some valuable simulation with the Fast Excel Development Template? Our Fast Excel Development Template offers a code-free automation system based on Excel, likely already your daily driver for data processing.
In fact, if you are a fellow member of our community, you know how to benefit from the automated coding tools offered by the Fast Excel Development Template and build a system with zero human-written code. Distributing the tools you build is very easy too: you just share the workbooks.

Let’s explore together how to do simulation with the Fast Excel Development Template, without writing any code.
In this article I teach you how to build a simple inventory simulation tool with the Fast Excel Development Template (download the template here); furthermore, as usual, the tool I describe in the article is available for free download:
Inventory Simulation in Excel Download
Let’s imagine a simple electronic device like the one pictured below, together with some of its components. It is assembled from purchased components according to a single-level bill of materials. The shop assembles 5 models out of purchased components with different lead times.

Of course, the daily demand quantity of each finished good is variable as well as the lead time of the components: the generation of these two figures is a crucial part of our simulation workflow.
We want to simulate a min-max replenishment policy for both the finished products and the components. In other words, when the simulated projected inventory level goes below the minimum days of coverage, a replenishment order is released to reach the maximum days of coverage, taking into account the current inventory level itself, the orders already released but not yet delivered, the minimum order quantity and the maximum coverage we want.
To do this, we want to simulate the demand of the finished products and the lead time variation of the components. We will use the demand history and the lead time history to build the demand and lead time pseudo-random generators. I explain the meaning of pseudo-randomness below.
The use case
This simulator is inspired by a real case I faced, where the inventory coverage policies had room for improvement.
We run the simulation over one year and we repeat it 10 times.
In our simple case we start with min-max coverages set to 14 and 21 calendar days respectively for the finished goods, and to 7 and 14 for the components.
The assembly line producing the finished goods is very flexible: we consider a single-day lead time. On the other hand, some of the components have a long lead time – up to 42 days – with variability.
Running a simulation – 10 iterations are enough – demonstrates that these settings do not work: they generate a uselessly high inventory level for finished goods, in the hope of better serving the customer demand, and very frequent component shortages, due not only to the lead time variation but also, and foremost, to the low total coverage compared to the nominal lead time of the long lead time components. In fact, a max coverage of 21 (finished goods) + 14 (components) = 35 days does not cover the nominal lead time of some of the components. It is worth mentioning that, given this excellent assembly flexibility, positioning buffers at the finished goods level makes the inventory cost higher than increasing the component buffers for the same overall performance.
Here is the simulation chart of a finished good: note the average level at about 40.

And here is the simulation chart of a long lead time component: observe the steady negative availability, i.e. shortage.

To address this unfortunate original setup, we can apply an important reduction of the finished goods min-max, thanks to the already mentioned flexibility of the assembly line, and a significant increase of the component min-max based on the lead time history analysis. Specifically, I set min equal to the third quartile and max equal to the third quartile plus the interquartile range – don’t worry if this sounds like black magic, I teach you about quartiles and the interquartile range below. Again, a 10-iteration simulation shows us the improvement: we observe limited component shortages, while the reduced finished goods inventory guarantees good availability for the customers.
Here is the simulation chart of a finished good: note the significantly lower average inventory level.

Here is the simulation chart of a long lead time component: observe the close-to-none shortages, i.e. negative availability, once the inventory settles with the new min-max policy.

The charts below give you a visual glance at the improvement from a financial point of view: you can see that long lead time component shortages are reduced by far and the total inventory value is reduced as well. For precision’s sake, the finished goods inventory value is theoretical: in case of component shortages, the assembly is not possible and this affects the projected inventory level.

Intrigued?
Let’s split the workflow into steps and go through the details of the process.
Step one: historical data analysis and the Box and Whiskers plot
First, we analyze the historical datasets: we want to study their spread and skewness and to remove the extreme figures, as they could lead us to bad simulation results.
The Box and Whiskers plot, also known as Box plot, is a robust and simple tool to study datasets; furthermore, Excel offers a handy chart for representing it.
The Box and Whiskers plot is useful for:
- identifying the spread and skewness of data
- highlighting outliers
- comparing distributions across multiple datasets
We need to sort our datasets, demand and lead time, in ascending order and to calculate their quartiles.
In statistics, quartiles are a type of percentile which divide the number of data points into four parts, or quarters, of more-or-less equal size. The data must be ordered from smallest to largest to compute quartiles; as such, quartiles are a form of order statistic. The three quartiles, resulting in four data divisions, are as follows:
- Q1, the first quartile, is the 25th percentile: the lowest 25% of the data lies below this point. It is also known as the lower quartile.
- Q2, the second quartile, is the median of the data set; 50% of the data lies below this point.
- Q3, the third quartile, is the 75th percentile: the lowest 75% of the data lies below this point. It is also known as the upper quartile.
Along with the minimum and maximum of the dataset, which are also quartiles, Q0 and Q4 respectively, the three quartiles described above provide a five-number summary of the data.
Also, the Inter Quartile Range (IQR) is a measure of statistical dispersion, defined as the difference between the third quartile (Q3) and the first quartile (Q1).
The IQR represents the range within which the central 50% of the data lies, providing insights into the dataset variability while excluding outliers.
As for the outliers, we will use the IQR to exclude them: we will discard the values less than Q1 – 1.5 × IQR and greater than Q3 + 1.5 × IQR.
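For the curious, the same outlier rule can be sketched in a few lines of Python; this is just an illustration of the logic, not part of the zero-coding tool, and the sample data is made up:

```python
import statistics

def remove_outliers(data):
    """Discard values below Q1 - 1.5*IQR or above Q3 + 1.5*IQR."""
    # statistics.quantiles with n=4 returns [Q1, Q2, Q3];
    # method="inclusive" treats min and max as part of the population.
    q1, _, q3 = statistics.quantiles(sorted(data), n=4, method="inclusive")
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if lo <= x <= hi]

# Made-up demand history: 90 is an obvious outlier and gets discarded.
demand_history = [12, 14, 15, 13, 14, 16, 90, 15, 13, 14]
print(remove_outliers(demand_history))  # [12, 14, 15, 13, 14, 16, 15, 13, 14]
```

The worksheet version does exactly the same thing with the quartile formulas and a comparison against the two bounds.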

The picture above shows the Excel Box and Whiskers chart; here is a quick explanation of how it works.
The Box represents the Inter Quartile Range (IQR), the range between the first quartile (Q1) and the third quartile (Q3). This range contains the central 50% of the data. The line inside the box represents the median (Q2), which divides the dataset into two equal halves.
The Whiskers extend from the box to the smallest and largest data points within the range of Q1 – 1.5 × IQR (lower bound) and Q3 + 1.5 × IQR (upper bound). They show the spread of the data, excluding outliers.
The Outliers: data points outside the whisker range are considered outliers and are typically represented as dots.
As an additional feature, our Excel Box and Whiskers plots also display the mean as an “X” marker, to provide further insight into the dataset’s central tendency.
How smart is this? We can clean our data from the exceptions, and we get a few-figure summary of our dataset together with a visual representation of it.
Here below are the Sales History and LT History box plots.


Step 2: frequency analysis and the histogram of demand and lead-time variation.
Now we perform the frequency analysis of the historical series after the outlier removal.
In other words, we count the repetitions of each value for each part number.
We then express the frequency as a percentage of the total occurrences.
The graphical representation of this is a histogram, as shown in the picture.

This representation shows us the distribution of our values.
Now we use the histograms as a “probability distribution”: in other words, we extract random values among the ones shown in the histograms, respecting the percentage of each value as if it were its probability.
Here is the trick: we transform each percentage into a proportional number between 1 and 1000, e.g. 50% becomes 500 and 1.3% becomes 13; then we run the function =RANDBETWEEN(1,1000), which generates an almost random number between 1 and 1000; finally we match the result against a cumulated table that respects the repetition percentages of our cleaned datasets, to pick the corresponding demand or lead time value for each item.
Consideration n. 1: the =RANDBETWEEN(1, 1000) function generates pseudo-random numbers, meaning that they appear to be random but are actually generated from a starting number, or seed, by an algorithm. Nevertheless, we consider the Excel pseudo-randomness to be random enough for our use case.
Consideration n. 2: the most common method for generating supply chain simulation data is to select a distribution, e.g. the normal distribution, and parametrize it according to figures calculated from the dataset, e.g. average and standard deviation for the normal distribution. However, selecting the correct distribution can be very hard, and a wrong choice leads to meaningless results. The method I’m going to explain may miss values that never showed up in the historical series, but it is simple and does not require any statistics knowledge.
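To see the histogram-sampling trick outside Excel, here is a Python sketch of the same mechanism. The weights are assumed to be already scaled to integers summing to 1000; names and figures are illustrative:

```python
import random

def build_cumulative(weighted_values):
    """weighted_values: list of (value, weight out of 1000).
    Returns (threshold, value) rows: threshold is the first random
    number in 1..1000 that maps to that value -- the cumulated column."""
    table, start = [], 1
    for value, weight in weighted_values:
        table.append((start, value))
        start += weight
    return table

def draw(table):
    """Pick the row with the largest threshold <= r, just like
    =MATCH(RANDBETWEEN(1,1000), cumulated column, 1) on sorted data."""
    r = random.randint(1, 1000)
    chosen = table[0][1]
    for threshold, value in table:
        if threshold <= r:
            chosen = value
    return chosen

# Illustrative histogram: daily demand 0 (30%), 1 (50%), 2 (20%).
histogram = [(0, 300), (1, 500), (2, 200)]
table = build_cumulative(histogram)   # [(1, 0), (301, 1), (801, 2)]
samples = [draw(table) for _ in range(10000)]
print(round(samples.count(1) / len(samples), 2))  # close to 0.50
```

Over many draws, each value shows up with roughly its historical frequency, which is exactly what the worksheet version achieves with RANDBETWEEN and an approximate MATCH.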
Step three: run the simulation
We have our demand for finished goods and lead time for components distributions and an algorithm to generate them: we are ready for the first simulation run.
We generate the daily demand for each of our finished goods for one year, according to the distribution, as explained above. Starting from the inventory on hand, we project the inventory level by applying the min-max logic. This generates assembly orders, for which we will need components.
By exploding the flat BOM we can calculate the component demand. We apply the min-max policy again and generate the inventory projection, this time taking into account the components’ randomly generated lead times, according to their “distribution”, and the MOQs.
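To make the projection logic concrete, here is a simplified Python sketch of the day-by-day min-max projection for a single item. It is my illustration, with simplifying assumptions of my own: coverage days are converted to units through an average daily demand, and every order arrives exactly after the given lead time:

```python
def simulate_item(on_hand, daily_demand, avg_daily, min_days, max_days,
                  moq, lead_time):
    """Project end-of-day inventory under a min-max policy."""
    min_level = min_days * avg_daily        # reorder point, in units
    max_level = max_days * avg_daily        # order-up-to level, in units
    arrivals = {}                           # day -> quantity arriving
    projection = []
    for day, demand in enumerate(daily_demand):
        on_hand += arrivals.pop(day, 0)     # receive orders due today
        on_hand -= demand                   # consume; negative = shortage
        on_order = sum(arrivals.values())   # released, not yet delivered
        position = on_hand + on_order
        if position < min_level:            # below min coverage: reorder
            qty = max(max_level - position, moq)  # up to max, respect MOQ
            due = day + lead_time
            arrivals[due] = arrivals.get(due, 0) + qty
        projection.append(on_hand)
    return projection

# Illustrative run: constant demand of 3/day, min 14 / max 21 days coverage.
proj = simulate_item(on_hand=40, daily_demand=[3] * 30, avg_daily=3,
                     min_days=14, max_days=21, moq=10, lead_time=1)
print(proj[:3])  # [37, 60, 57]
```

With a 1-day lead time the item never goes short; feeding the same function shortage-prone parameters (a long lead time with low coverage) can reproduce the negative availability seen in the charts above.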
At the end of the simulation run we output the results as text files, separately for finished goods and components.
To export the results of the simulation we use the OUT FEDT automation statement, an advanced export that allows adding a timestamp to the file name, with a maximum granularity of 1 second – this means that if more than one file per second is generated, only the last file exported within that second will be available, because it overwrites the previously generated ones.
Unless your computer is very powerful or your simulation extremely simple, the granularity of 1 second should be enough; let me know in the comments or drop me an email if you need to go down to microseconds in the timestamp.
Consideration: this step can be made as complicated as you wish, e.g. adding finite capacity scheduling for the finished goods, or simulating a different case with a multilevel BOM and DDMRP buffers allocated along it. Here we can benefit from the many how-tos we at Production-scheduling.com have developed over 20+ years.
Step four: iterate and report on the simulation
Fourth, we repeat the third step a number of times.
To iterate, we use the NEXT FEDT automation statement, which makes the automation macro write the code needed to choose the next worksheet to be processed at runtime, enabling branching, looping and recursion.
Once the iterations are completed, we summarize the results for the finished products and the components in tables and charts.
If we want to compare different min-max policies, we modify the items.txt file and repeat the process.
The tool consolidates all the available simulation run text files, therefore different min-max parametrizations can be compared at the same time.
The tool: SimInv_v2.xlsb
Now I walk you through the tool workflow.
Input
We have five input text files:
- items.txt: the items with their code, description, minimum order quantity, nominal lead time, min and max coverage in calendar days, and cost
- BOM.txt: the classical flat Bill Of Material: finished good, component and qty per
- OnHand.txt: the starting finished goods and component inventory levels
- SalesHistory.txt: the history of the finished goods demand; it is the dataset used to calculate the distribution from which the random demand values are picked
- LTHistory.txt: the history of the component lead times; it is the dataset used to calculate the distribution from which the random lead time values are picked

Calculations
I already described the process above: the simulation runs the projection between two dates and is repeated a number of times. The three figures are set in the PARA worksheet as follows:
- PARA_StartDate: 01/01/2025
- PARA_EndDate: 31/12/2025
- PARA_Iterations: 10
Here is the process flow diagram to give you a visual overview.

Here follow some technical details.
The Demand and Lead Time distribution
Once the data is cleaned from the exceptions – the outliers, if you like – and the percentage of each value is calculated, we can use =RANDBETWEEN(1, 1000) and the approximate search =MATCH(…, …, 1) to get the correct result.
Let’s first multiply the percentages by 1000 and cumulate them starting from 1: the result is a column that, when searched with =MATCH(RANDBETWEEN(1, 1000), By1000 column, 1), extracts each value according to its percentage – see the table below. For example, a randomly generated number of 503 – like any of the 252 values from 472 to 723 included, about 25.2% – will match the row showing 472 on the By1000 column, and therefore select the Day Qty value 1.

I used the very same method for both demand and lead time.
Power Query
I used Power Query for joining tables (merging, in PQ language) and for stacking them (combining, in PQ language), plus a few more calculations. I purposely avoided M language coding to stick to the zero-coding approach.
If you are not familiar with Power Query, I warmly recommend reading this article of mine for some useful tips on Power Query and Fast Excel Development Template integration, in particular about the PARA parameters passed to Power Query, the disabling of background refreshing, the integration with the Fast Excel Development Template Pivot Template, and the Power Query option for loading data to a Pivot Table.
Looping and exporting / importing txt files
I used the automation NEXT statement to repeat the inventory projection simulation and the OUT statement to export the result of each simulation to a different txt file, with a timestamp added to the name.
Yes, but why generate all those text files?
Good question indeed: it would be possible to generate the simulations one after the other in the same table, without looping and exporting data.
However, the way I chose makes:
- the simulation portion simpler, and therefore easier to develop, debug and maintain in general
- easier to adapt the formulas and spreadsheets we are used to for planning and scheduling (modularity)
- the system more resilient to large datasets: for example, the component simulation of this simple case takes about 9,000 rows per run; since the stacking of the simulation run txt files is performed in Power Query without loading them in a worksheet, and the results are summarized in Pivot Reports, the maximum number of rows of an Excel worksheet is no longer a limiting factor
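As an aside, the stack-the-run-files idea is easy to picture outside Excel too. Here is a standard-library Python sketch with made-up file names and columns (the tool itself does this step in Power Query):

```python
import csv
import glob
import os
import tempfile

def write_run(folder, stamp, rows):
    """One tab-separated file per simulation run; the stamp plays the
    role of the OUT statement's timestamp suffix. Layout is illustrative."""
    path = os.path.join(folder, f"CompoSim_{stamp}.txt")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["run", "item", "day", "inventory"])
        writer.writerows(rows)

def stack_runs(folder):
    """Read every run file and stack the data rows into one dataset,
    skipping each file's header -- the 'combine' step of the reports."""
    stacked = []
    for path in sorted(glob.glob(os.path.join(folder, "CompoSim_*.txt"))):
        with open(path, newline="") as f:
            reader = csv.reader(f, delimiter="\t")
            next(reader)                    # drop the per-file header
            stacked.extend(reader)
    return stacked

with tempfile.TemporaryDirectory() as d:
    write_run(d, "20250101_120001", [["1", "C-100", "1", "50"]])
    write_run(d, "20250101_120002", [["2", "C-100", "1", "48"]])
    print(len(stack_runs(d)))  # 2
```

Because the stacked dataset never has to fit in a single worksheet, the number of runs can grow well past what a sheet could hold, which is exactly the resilience point above.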
Of course, the txt file exporting and processing negatively affects the performance of the tool, and the handling of large datasets increases the processing time too.
In general, the effective tools you can build with the Fast Excel Development Template and no coding are slower and more resource intensive than Python-coded applications. The perks are the ease of building, improving and maintaining, not the performance.
If you are looking for performance, or you need to handle very large simulations and are ready to master some Python coding skills, check out the content on Python-Excel integration I already mentioned above.
Reports
This is one of the most report-rich tools I have ever shared with the community.
Let’s list them:
- SalesBoxPlot: the box and whiskers figure summary of the Sales History dataset
- SalesBoxPlotChart: the box and whiskers chart of the Sales History dataset
- SalesStatistics: the value frequency table of the Sales History dataset
- SalesStatsChart: the histogram of the value frequency table of the Sales History dataset
- LTBoxPlot: the box and whiskers figure summary of the Lead Time History dataset
- LTBoxPlotChart: the box and whiskers chart of the Lead Time History dataset
- LTStatistics: the value frequency table of the Lead Time History dataset
- LTStatsChart: the histogram of the value frequency table of the Lead Time History dataset
- AssySimReport: Simulation results for finished goods as quantity table
- AssySimChart: Simulation results for finished goods as quantity chart
- AssySimCostReport: Simulation results for finished goods as cost table
- AssySimCostChart: Simulation results for finished goods as cost chart
- CompoSimReport: Simulation results for components as quantity table
- CompoSimChart: Simulation results for components as quantity chart
- CompoSimCostReport: Simulation results for components as cost table
- CompoSimCostChart: Simulation results for components as cost chart
Performance considerations
This tool is definitely not hyper fast.
In fact, the performance is affected by:
- the loading of the input data
- the number of simulation runs
- the number of pseudo-random values extracted
- the complexity of the model you are simulating, repeated for each simulation
- the exporting to txt files of each simulation's results, for finished goods and for components: two files per simulation
- once per tool run, the importing and stacking of the txt files generated during the current run and left over from previous simulations
I did some testing on a 10-year-old Lenovo ThinkPad E550 powered by an Intel(R) Core(TM) i7-5500U CPU @ 2.40 GHz (2 cores, 4 threads), 16 GB DDR3 RAM, a SATA SSD, Windows 11 24H2 and Microsoft 365.
The 985,500 rows of the 100-run component simulation are handled smoothly.
On this machine, 10 runs take about 7 minutes, 50 runs take 33 minutes and 100 runs take 1 hour and 6 minutes.
On a laptop with an 11th generation Intel i5 CPU, 16 GB DDR4 RAM and an M.2 SSD, the processing time goes down significantly: 10 runs take about 3 minutes, 50 runs take 16 minutes and 100 runs take 32 minutes.
Mind that a simulator is typically a design and testing tool, not a daily driver, therefore processing speed is not that important.
Furthermore, the number of iterations needed for an informative simulation varies, and the sweet spot balancing information content and run time is frequently lower than you would expect.
As an example, below are the 5, 10, 20 and 100 iteration charts of our model for one long lead-time component: the 10-iteration chart is decently informative; adding iterations adds only marginal value.


To get many iterations without keeping Excel busy for a long time, you can run small repeated batches, e.g. 10 batches of 10 iterations for a total of 100 iterations: it takes more time than 100 iterations in one shot, but the resulting datasets are equivalent.
If you really need a huge number of iterations, you can federate more laptops and consolidate the results in one report by changing the exported file prefix for each PC in the PARA worksheet (PARA_Prefix parameter). Note that you will need to regenerate the automation macro by clicking twice on the Switch Automation button in the Menu worksheet, or by repeating the shortcut Ctrl + Shift + A twice, on each PC. Also, some modifications to the simulation reports will be needed to avoid the overlapping of simulations from different machines whose timestamps are identical to the second – I didn’t test the federated simulation.
Tool download and conclusion
To download the tool, fill in the form below; you will receive a download link.
Inventory Simulation in Excel Download
Once downloaded, unzip the compressed zip file under C:\.
You will have a folder named C:\P-S_SimInv containing the Fast Excel Development Template I used for building the tool, the SimInv_v2.xlsb workbook, which is the tool itself, and a subfolder Data with the input txt files; in particular, items.txt is the input actually loaded, while items_bad.txt and items_good.txt are backups with the bad and the good min-max policies respectively.
The simulation data are stored separately in the folder Archive/Assy for finished goods and in the folder Archive/Compo for the components: be aware that these files are never automatically deleted and are all stacked for the simulation reports. On one hand you can compare different simulations; on the other, if you don’t clean these folders you will bloat your reports with useless data.
You can change the working folder by changing the content of Menu!B10, but make sure to rebuild the automation macro by clicking twice on the Switch Automation button in the Menu worksheet or by repeating the shortcut Ctrl + Shift + A twice.
We learned how to build a complete simulation tool with zero coding.
Now you can build your own specific simulator or adapt this one to your use case without coding.
And of course, get in touch with me by email for any doubt or concern (gabriele at production-scheduling.com).
