This is the tenth episode examining Dan Vacanti’s book Actionable Agile Metrics for Predictability, An Introduction. Previously we discovered the following peculiarities about *Service Level Agreements* and *Classes of Service*:

- Different types of work receive different treatment or service.
- A Class of Service is a policy that determines the pulling sequence of committed work.
- The decision about which Class of Service applies should be made only when a work item is first pulled.
- Even minor changes to pull policies can have huge impact on Cycle Time distributions.
- Policies induce self-inflicted variability!
- A FIFO queue is the most effective pull policy.
- If the nature of your process disallows FIFO queuing, you should strive to change the process to support FIFO pulling as much as possible.
- The best way to handle variability and yet maintain high predictability is to deliberately build excess capacity — that is, slack — into the process.
- There will be a strong incentive to expedite all items.
- Expedition will effectively stop all work on standard items.
- Classes of Service are really speculative guesses about business value.
- The highest-priority Class of Service will damage every other item put on hold!
- Things should be done as fast as possible, without interferences in their flow through the process.
- Classes of Service are considered an institutionalized violation of Little’s Law.
- Once the process is predictable, chances are you won’t ever need Classes of Service.
- Expedition and preferential treatment are recognized as sources of conflict which undermine TameFlow’s Unity of Purpose.

Now we will discover Dan Vacanti’s ideas about *Forecasting*, the *Monte Carlo Method* and how to *Get Started with Flow Metrics and Analytics*.

## Chapter 14 *Forecasting*

Most businesses will want to know *“when it will be done.”* Traditionally, to answer that question, teams employ expert opinion estimation. Dan suggests that another way to answer the same question is to use a forecast. **Forecasts must be expressed as a date range and a probability.**

As highlighted earlier in the book, Dan stresses that *Little’s Law cannot be used to predict the future.* Little’s Law can only describe the past.

Little’s Law is valid only if all of its assumptions are guaranteed, and there is no way to know if and how any of those assumptions might be violated in the future.

Furthermore, Little’s Law is expressed through averages, and an average is of little use without knowledge about the underlying distribution. Little’s Law is powerful because it does not need to know about such a distribution. **Yet without a distribution, we won’t have probabilities to support our forecasts.**

Little’s Law might be used to make a quick qualitative forecast; but you should not expect it to work deterministically. *To get a meaningful forecast we need to know the data’s distribution.* It is worth investing to get that knowledge, typically by gathering data out of your own process.
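Such a quick qualitative forecast is just the arithmetic of Little’s Law rearranged for Cycle Time. A minimal sketch, with made-up numbers:

```python
# Quick qualitative forecast via Little's Law (averages only, no probability).
# The WIP and Throughput figures below are made up for illustration.
avg_wip = 20          # average number of items in progress
avg_throughput = 2.0  # average items completed per day

# Little's Law: Avg Cycle Time = Avg WIP / Avg Throughput
avg_cycle_time = avg_wip / avg_throughput
print(avg_cycle_time)  # 10.0 days -- an average, not a probabilistic forecast
```

Note that the result is a single average with no confidence attached, which is exactly why Dan warns against treating it deterministically.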

When a distribution is available, you can make reliable forecasts, such as:

- **Single Item Forecasts**: This is straightforward. You just need to look at the *Cycle Time* data, pick the percentile of the forecast, and use the corresponding range.
- **Linear Projection Forecasts**: In general, the more the assumptions of Little’s Law are warranted, the more reliable a linear projection will be. A linear projection should be drawn on a burn-up chart, in relation to a backlog that needs to be delivered. (A linear projection should not be used with a *Cumulative Flow Diagram*, since CFDs represent the past, not the future; and in general they should not show a backlog.)
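The single-item case really is this simple. A sketch, using invented Cycle Time samples (in days) and a hand-rolled percentile helper:

```python
# Sketch: single-item forecast from historical Cycle Time data.
# The samples below (days per item) are made up for illustration.
cycle_times = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9, 11, 14, 16, 21, 30]

def percentile(data, p):
    """Smallest sample at or below which at least p% of the data falls."""
    ordered = sorted(data)
    k = -(-len(ordered) * p // 100) - 1   # ceiling division, then 0-based index
    return ordered[max(0, int(k))]

# 13 of the 15 samples (~87%) are at or below this value, so the forecast
# reads: "a new item will be done within this many days, with ~85% confidence."
print(percentile(cycle_times, 85))   # prints 16
```

In practice you would read the same number off the percentile lines of a Cycle Time Scatterplot rather than compute it by hand.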

*Linear projections are useful, but there are a number of caveats.* The forecast is based on (past) averages, which will not account for variability in the backlog or variability in the throughput. Linear projections appear to give a deterministic forecast, not a date range with a probability. They might be particularly unreliable when there are zero-WIP conditions and an S-curve develops. They are also adversely affected by *Classes of Service* or *Flow Debt* of any kind.
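The deterministic nature of a linear projection is easy to see in a sketch (all numbers invented): remaining backlog divided by average Throughput, projected forward from today.

```python
# Sketch of a linear projection forecast (illustrative numbers only).
# Average past Throughput is projected forward over the remaining backlog.
from datetime import date, timedelta

backlog_remaining = 60                 # items still to deliver
weekly_throughput = [4, 6, 5, 7, 3]    # items completed in recent weeks

avg_throughput = sum(weekly_throughput) / len(weekly_throughput)  # 5.0/week
weeks_needed = backlog_remaining / avg_throughput                 # 12.0 weeks

today = date(2016, 1, 4)
projected_finish = today + timedelta(weeks=weeks_needed)
print(projected_finish)   # a single date -- no range, no probability
```

The output is one date, which illustrates the caveat above: nothing in the projection accounts for variability in the backlog or in the Throughput.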

*One easy way to determine whether on-time delivery will be achieved is to use the percentile lines of a Scatterplot diagram.* You just need to consider the time interval equivalent to, say, the 85th percentile of the *Cycle Time* distribution, which you can easily read off a Scatterplot diagram.

Then you subtract that interval from the required delivery date. Any item started before that point in time will have an 85% chance or more of being delivered on time. Any item started later will have less than an 85% chance of being delivered on time.
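The subtraction itself is trivial; a sketch with assumed numbers (the 16-day figure stands in for whatever your Scatterplot’s 85th percentile line shows):

```python
# Sketch: latest start date for an ~85% on-time chance (assumed numbers).
from datetime import date, timedelta

p85_cycle_time_days = 16              # read off the Scatterplot's 85th percentile line
required_delivery = date(2016, 6, 30)

latest_start = required_delivery - timedelta(days=p85_cycle_time_days)
print(latest_start)  # items started on or before this date have >= 85% on-time odds
```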

## Chapter 15 *Monte Carlo Method*

*A better way to make forecasts is to use the Monte Carlo Method.* Again, the more strictly the assumptions of Little’s Law are upheld, the more reliable the forecasts.

Unless the process is predictable, you cannot have confidence in the underlying data. Therefore, it pays off to balance the *Cumulative Flow Diagrams* (i.e. balance arrivals against departures), and eliminate all triangles in *Scatterplots* (i.e. eliminate any *Flow Debt*).

A key question is about *What Data to Use*. If data is not available, make sure to collect it (mine or measure). *Find out what the real data distribution is.* No theoretical distribution will beat the one you have in your own context.

Another important aspect is about *How Much Data* is necessary. *It turns out you do not need a lot of data to get started.* In total absence of data, you need very few samples to have an idea about the underlying distribution shape.

After only five random samples, you can already reliably find the median. You can assume the data is uniformly distributed (even knowing that it definitely is not), and with only a dozen samples you will have a 90% probability that the next samples will fall between the minimum and maximum of that population, so you have a reasonable idea of the range of the data. In any case, you should then continue to collect data, and switch to the actually measured distribution as soon as possible.
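The five-sample claim rests on a simple probability argument: the true median falls outside the range of five random samples only if all five land on the same side of it, which happens with probability 2 × (1/2)⁵ ≈ 6%. A quick simulation (my own sketch, not from the book) illustrates it:

```python
# Sketch: how often does the true median fall between the min and max
# of just five random samples? (Own illustration, not from the book.)
import random

random.seed(42)
trials = 100_000
hits = 0
true_median = 0.5   # median of the uniform(0, 1) distribution

for _ in range(trials):
    samples = [random.random() for _ in range(5)]
    if min(samples) < true_median < max(samples):
        hits += 1

rate = hits / trials
print(round(rate, 3))   # close to 1 - 2 * 0.5**5 = 0.9375
```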

When applying the *Monte Carlo Method* it is important to understand what assumptions are in effect.

There are assumptions (both explicit and implicit) in the model you are using, and there are assumptions in the tool used to execute the simulation. *The assumptions in the model and those in the tool have to match, and they in turn have to match what happens in the real world.* Unless this is warranted, the method will not work.
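As a minimal sketch of such a simulation (assumptions: daily Throughput is resampled independently from your own measured history, and the backlog is fixed — both simplifications a real model would have to justify):

```python
# Minimal Monte Carlo forecast sketch: resample historical daily Throughput
# until the backlog is exhausted; repeat many times; read off percentiles.
# The Throughput history and backlog size below are made up for illustration.
import random

random.seed(7)
daily_throughput = [0, 0, 1, 1, 1, 2, 2, 3, 4, 5]  # items/day, measured history
backlog = 50                                        # items to deliver
trials = 10_000

durations = []
for _ in range(trials):
    done, days = 0, 0
    while done < backlog:
        done += random.choice(daily_throughput)     # resample one historical day
        days += 1
    durations.append(days)

durations.sort()
# 85th percentile: in 85% of simulated futures the backlog finished by this day
print(durations[int(trials * 0.85) - 1])
```

The result is exactly what Dan asks a forecast to be: a duration (or date) paired with a probability, derived from your own data rather than a theoretical distribution.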

**Start tracking data manually, even if you are using a tool. Then validate that the tool works as you expect.**

Don’t trust tools blindly. Beware that many tool vendors might not collect or represent data correctly.

## Chapter 16 *Getting Started*

When it comes to getting started with flow metrics and analytics, Dan reiterates the advice given at the beginning of the book.

First: *Start by defining your process.* You have to clearly decide where the boundaries are, where work arrives into and departs from the process; and then count the work items as *Work in Progress* (WIP).

Next: *Ensure that your policies do not violate the assumptions of Little’s Law.* If they do, you must change the policies accordingly, because each violation will damage predictability. Remember to design or redesign your process to support predictability.

Then: *Become methodical about capturing data.* You need to be methodical whether you gather data manually or with a tool. All team members must agree on consistently capturing data, so that you can trust the data later when you need to analyze it, and base forecasts on it.

You must ensure that you track items properly, even when exceptions to normal flow occur, like: back-flows, skipping of steps, abandonments. You should describe all work items with further attributes, which you then can use for filtering and segmentation.

## Forecasting, Monte Carlo Method, and Getting Started in TameFlow

As in most of the book, the concepts presented by Dan in these chapters are highly relevant to *TameFlow*. In *TameFlow*, forecasts are expressed as a date range with a probability, represented by the MMR Buffer.

*TameFlow* relies heavily on *Linear Projection Forecasts* in order to constantly compute and monitor the *Buffer Burn Rate* and derive actionable signals from it.

The *Monte Carlo Method*, based on *Cycle Time* distribution data, can be used with *TameFlow* too.

Dan’s advice on how to get started is also good for getting started with *TameFlow*. Once the data is gathered, it can be used to make forecasts, to size and position the MMR Buffer, and to give a reference baseline for future improvement initiatives.

Links:

- Dan Vacanti’s book is Actionable Agile Metrics for Predictability, An Introduction.
- For more information about *TameFlow*, read the book Hyper-Productive Knowledge-Work Performance, The TameFlow Approach and Its Application to Scrum and Kanban.
- Check the site https://tameflow.com.