A theme that has run through my career since my Master's project is the question of measuring complexity in modelling and simulation. When can one claim to have built a simulation model, and when is one merely glorifying simple analysis?

In the Operations Research ecosystem the tendency is certainly to inflate. Salespeople, CV writers, recruiters and consultancies across the spectrum are all motivated to embellish the work that they do and the work that is done for them. Like any scientifically minded person, I want to cut through the static, inform myself as to who is doing extraordinary work, and build a framework from which I can safely criticize the inflations of others.

I have been working on a set of rules for separating "models" into models, calculations and simulations. I feel there is a wide-open opportunity here for contributions from complexity theory, chaos theory, and other disciplines in Computer Science and Mathematics, but here is what I've put together thus far:

*Simulations are models, but not all models are simulations. Calculations are not models.*

**Models**

- A model is a simplified representation of a system.
- All models are wrong, but some models are useful (George Box).

**Calculations**

- The result of a calculation can be expressed in a single equation using relatively basic mathematical notation.
- Where a calculation contains a time element, values at different times can be determined in any order, without reference to previous values.

**Simulations**

- A simulation is a calculation in which one parameter is a simulation clock, which increments regularly or irregularly.
- The outcome of a simulation could not have been determined without the use of the clock.
- While an initial state is typically defined, an intermediate state at a given time should be difficult or impossible to determine without having run the simulation to that point.
- Almost any model that involves repeated samples of random numbers should be classified as a simulation.
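The distinction above can be made concrete in code. Below is a minimal sketch (the function names, parameters, and the Gaussian growth assumption are mine, for illustration): the calculation can be evaluated for any year in isolation, while the simulation must step the clock forward because each year's state depends on the previous one.

```python
import random

# A calculation: the value for any year can be computed directly,
# without knowing the values for any other year.
def savings_calculation(savings_per_truck, trucks_by_year, year):
    return savings_per_truck * trucks_by_year[year]

# A simulation: each year's customer count depends on the previous
# year's, so year 10 cannot be known without stepping through years 1-9.
def savings_simulation(savings_per_truck, initial_customers,
                       trucks_per_customer, growth_dist, years, seed=None):
    rng = random.Random(seed)
    customers = initial_customers
    total = 0.0
    for _ in range(years):  # the simulation clock
        # Stochastic growth compounds from one year to the next
        # (growth_dist = (mean, std dev) of a Gaussian, an assumption).
        customers *= 1 + rng.gauss(*growth_dist)
        total += savings_per_truck * customers * trucks_per_customer
    return total
```

Note that fixing the seed makes the simulation reproducible, but the intermediate state at year *t* is still only knowable by running the clock up to *t*.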

Consider the following progression of "models" that output an expected total savings:

1. Inputs: Expected total savings.
2. Inputs: Annual savings by year, time-frame of analysis.
3. Inputs: Annual savings per truck per year, number of trucks by year, time-frame of analysis.
4. Inputs: Annual savings per truck per year, current number of customers, number of trucks per customer, annual increase in customers, time-frame of analysis.
5. Inputs: Annual savings per truck per year, current number of customers by geographical location, annual increase in customers by geographical location, routing algorithm to determine necessary trucks, time-frame of analysis.
6. Inputs: Annual savings per truck per year, current number of customers by geographical location, distribution of possible growth in customers by geographical location, routing algorithm to determine necessary trucks, time-frame of analysis.

As you can see, complexity builds and eventually passes a threshold at which we would accept it as a model. "Model" 4 is still little more than a back-of-the-envelope calculation, but Model 5 takes a quantum leap in complexity with the introduction of the routing algorithm. Model 5, however, I would still not classify as a simulation, because any year can be calculated without having calculated the others. Finally, Model 6 introduces a stochastic variable (randomness) that compounds from one year to the next and brings us to a proper simulation.
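That difference between Models 5 and 6 can be sketched directly. In this toy version (a single region, a stand-in `trucks_needed` function in place of a real routing algorithm, and a Gaussian growth distribution, all of which are my assumptions), Model 5 computes any year in isolation, while Model 6 must iterate because the sampled growth compounds:

```python
import random

def trucks_needed(customers):
    # Hypothetical stand-in for a routing algorithm:
    # assume one truck serves up to 50 customers.
    return -(-customers // 50)  # ceiling division

# Model 5: deterministic growth, so year t is computable in isolation.
def model5_savings(savings_per_truck, initial_customers, growth_rate, year):
    customers = round(initial_customers * (1 + growth_rate) ** year)
    return savings_per_truck * trucks_needed(customers)

# Model 6: sampled growth compounds, so year t requires simulating
# every year from the initial state up to t.
def model6_savings(savings_per_truck, initial_customers, growth_dist,
                   year, seed=None):
    rng = random.Random(seed)
    customers = initial_customers
    for _ in range(year):
        customers = round(customers * (1 + rng.gauss(*growth_dist)))
    return savings_per_truck * trucks_needed(customers)
```

In Model 5 the loop is unnecessary because the state at year *t* has a closed form; in Model 6 there is no shortcut past the clock.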

I've seen calculations masquerading as simulation models at a Fortune 500 company, both internally and from outside consultants. The result is the same either way: outcomes determined from data, with validity asserted by the author. I expect the Operational Research practitioners reading this will appreciate the desire to classify; at the very least it will help us separate what the MBAs do with spreadsheets from our own work.

I welcome input from others on this topic, as I am only just developing my own theories.