Theory of Constraints and Software Engineering

Explore how the Theory of Constraints and Throughput Accounting can be used to make better software engineering management decisions.

In this post we will introduce the Theory of Constraints (TOC) and start looking at how it can be applied to software engineering management. TOC is best known for its so-called “Five Focusing Steps,” a process often referred to when identifying and dealing with bottlenecks in Kanban for Software. However, TOC offers many other tools for improving software engineering management. In particular, we will examine how Throughput Accounting (TA) can be used to make important management decisions.

Origins of the Theory of Constraints

The Theory of Constraints (TOC) is a management method developed primarily by Dr. Eliyahu M. Goldratt over the last 30 years, and first presented in the business novel “The Goal” [GOLDRATT-1992]. More recently, TOC has been described extensively in the “Theory of Constraints Handbook” by Cox and Schleier [COX-2010].

TOC originated in manufacturing, but has since been applied to engineering, project management, sales, accounting, marketing and other business processes. TOC is based on Systems Thinking, the Scientific Method and Logic.

The Scientific Method mentioned above is not to be confused with scientific management, which is a different management theory altogether. In fact, TOC appeals strongly to people with mathematics, engineering or physics backgrounds, while it has not gained the same recognition from the people it should interest most: those with business, administration, management and accounting backgrounds.

Fundamental Concepts of the Theory of Constraints

TOC considers any business as a system transforming inputs into outputs. The inputs undergo a number of work steps and are transformed into outputs. The outputs are the products/services valued and paid for by the business’s customers.

The key tenet of TOC is that the system’s output capability is limited by one of the work steps, the system’s so-called Constraint. The resource performing that work step is the system’s Capacity Constrained Resource (CCR). An analogy is often drawn with a chain: the chain (the system) is only as strong as its weakest link (the CCR).

Continuous Improvement and Theory of Constraints

TOC proposes a very simple Process Of Ongoing Improvement (POOGI), consisting of the Five Focusing Steps (5FS): identify the system’s constraint; decide how to exploit it; subordinate everything else to that decision; elevate the constraint; and, once the constraint has moved, start again from the first step. The 5FS are shown here:

The Five Focusing Steps

The 5FS is what TOC is best known for. When you systematically apply the 5FS, together with effect-cause-effect logical thinking, the areas touched upon will span the whole business organization. [SMITH-1999] sums up TOC in two words, focus and leverage, stating that: "TOC guides management toward where and how they should focus resources to leverage return on investment."

TOC’s prime principle is to focus on the weakest point of the organization, the constraint limiting throughput, and leverage all improvement efforts on the constraint.

Significance of Work in Process and Inventory

When work flows through a system, the placement of the CCR is (often) given away by work “piling up” in front of it: you find queues of Work in Process (WIP). As in JIT, Kanban, and other methods, WIP and inventory are considered negative. [NOREEN-1995] explains: "Excess inventories can increase cycle times, decrease due date performance, increase defect rates, increase operating expenses, reduce the ability to plan, and reduce sales and profits."

The recent success of Kanban for Software is a testament to why limiting work in process pays off. (And it is not coincidental that Kanban for Software was proposed by David Anderson, who also first introduced TOC to software.) The whole idea is to identify and manage what constitutes the inventory of software development.

Inventory in Software Development

If TOC is to be applied to software development, a relevant question is: What is inventory in software development and how can it be measured?

[ANDERSON-2003] was the first to apply TOC to managing software, and states that inventory is defined “through measures of client-valued functionality. […] The ideas captured as functional requirements or a marketing feature list represent the raw material for the software production system. Requirements represent the ideas being transformed into executable code.”

Client-valued functionality can be expressed in different ways, depending on the software methodology used. A unit of inventory could be, for instance:

  • A Use Case in UDP [a.k.a. RUP].

  • A Story Point in XP (eXtreme Programming).

  • A Feature in FDD (Feature-Driven Development).

  • A Backlog Item in Scrum.

  • A Function Point in traditional SDLC structured methods.

In short, “A unit of inventory is an idea for a client-valued function described in a format suitable for use by software developers.” In simpler terms: Inventory is the to-do list of client-valued functionality - no matter how the to-do list is represented.

Note here that the stress is on “client-valued” functionality because TOC is very much concerned with “business value.” Unlike Agile processes, TOC makes it mandatory to quantify such business value. The value is represented as estimated revenue during the planning and development phases, or as actual revenue when in production.

Throughput Accounting

Throughput Accounting (TA) is TOC’s approach to accounting. In [ANDERSON-2003] we also find the first application of TA to the field of Software Engineering. Anderson’s ideas have been further expanded by [RICKETTS-2007] who applies the approach to the professional, scientific, and technical services businesses, and therein to information technology and software engineering.

Unfortunately, TA has gained a bad reputation, mostly within the circles of accounting professionals, due to the fierce position taken by Goldratt against traditional accounting practices. Politics aside, the reputation is undeserved, as we will see. TA can truly be added to the arsenal of management tools, often with spectacular effects on the bottom line.

Originally, it was the need to manage business systems in a more scientific way that led to the creation of TA. In fact, TA is defined by a few simple formulas that may remind one of the equations found in physics.

TA is defined by the following arithmetic expressions:

  • Throughput: T = Revenue - Totally Variable Expenses

  • Net Profit: NP = Throughput - Operating Expense

  • Return on Investment: ROI = Net Profit / Investment
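
To make the arithmetic concrete, here is a minimal sketch in Python of the three expressions above, applied to purely hypothetical figures (the amounts are invented for illustration, not taken from any real business):

```python
def throughput(revenue, totally_variable_expenses):
    """T = Revenue - Totally Variable Expenses."""
    return revenue - totally_variable_expenses

def net_profit(t, operating_expense):
    """NP = Throughput - Operating Expense."""
    return t - operating_expense

def return_on_investment(np, investment):
    """ROI = Net Profit / Investment."""
    return np / investment

# Hypothetical yearly figures for a small software business.
T = throughput(revenue=1_200_000, totally_variable_expenses=200_000)  # 1,000,000
NP = net_profit(T, operating_expense=700_000)                         # 300,000
ROI = return_on_investment(NP, investment=1_500_000)                  # 0.20

print(f"T = {T:,}  NP = {NP:,}  ROI = {ROI:.0%}")
```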

With only three variables to consider, TA becomes accessible to and usable by non-accounting professionals. Decision-making is simplified. [BRAGG-2007] remarks that to make the correct decision, you need a positive answer to one of these three questions:

  • Does it increase throughput?

  • Does it reduce operating expenses?

  • Does it increase the return on investment?

It really is as simple as that: with TA, a positive business decision can be made if the action considered increases T, decreases OE or increases ROI.

In particular, notice how ROI is determined by only three variables: Throughput (T), Operating Expense (OE) and Investment (I). Since TOC is concerned with producing the maximum result with the minimum effort, a leveraging priority is defined among those three variables. One should favor, in order:

  1. Increase in T

  2. Decrease in I

  3. Decrease in OE

Notice that this priority is contrary to what is customary in conventional cost accounting, which favors cost reduction above everything else. One key tenet of TOC is this: reducing cost is limited, while increasing throughput is potentially unlimited. An excessive focus on cost reduction can jeopardize the company’s capability to deliver and end up decreasing throughput, with far more devastating consequences.
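
As a rough sketch of this decision logic (does the action increase T, decrease OE, or increase ROI?), the snippet below compares a hypothetical proposal against a baseline; the scenario and figures are invented for illustration:

```python
def roi(scenario):
    """ROI = (T - OE) / I for a scenario expressed as a dict."""
    return (scenario["T"] - scenario["OE"]) / scenario["I"]

def is_good_decision(baseline, proposed):
    """Positive answer to any of the three questions: higher T, lower OE, higher ROI."""
    return (proposed["T"] > baseline["T"]
            or proposed["OE"] < baseline["OE"]
            or roi(proposed) > roi(baseline))

# Baseline vs. a proposal that raises T while adding some OE (hypothetical figures).
baseline = {"T": 1_000_000, "OE": 700_000, "I": 1_500_000}
proposal = {"T": 1_100_000, "OE": 730_000, "I": 1_500_000}

print(is_good_decision(baseline, proposal))  # True: both T and ROI increase
```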

Throughput Accounting vs. Cost Accounting

The order of focusing in TA is the opposite of that assumed in traditional CA, where the focus is on reducing cost, while any consideration of increasing throughput comes last. This is another technical reason why TA is at odds with CA. CA presents fundamental problems, as described by [CORBETT-1998], because “cost accounting tries to minimize products’ costs. […] cost accounting is based on the assumption that the lower the cost of a product, the greater a company’s profit. As the product cost results from the products’ use of the company’s resources, one way of reducing the cost of a product is by reducing its process time on a resource.”

This kind of thinking leads companies to seek local optimizations (i.e. reducing processing time on all resources), only to be disappointed when the sum of the parts turns out to be less than the whole. TOC focuses optimization efforts only on the CCR, because TOC maintains a systems-thinking view and aims at improving the economic performance of the business as a whole. By focusing on the constraint, TOC and TA favor optimizing the performance of one resource at a time, not of all resources all the time. The chain analogy illustrates this: the whole chain gets stronger only by strengthening the weakest link.

When the focus is on throughput, you have to adopt a systems-thinking view, which is hindered by traditional CA. Minimizing overall resource utilization and unit product cost becomes the last concern, not the first, as CA would dictate.

Cost Accounting is not for Management Decisions

Using CA to make management decisions is a mistake, because that is not its intended purpose. [SMITH-1999] explains that CA is “designed to satisfy generally accepted accounting principles (GAAP). Nowhere is it written […] that the same information used for GAAP must be used to make management decisions.” Smith further observes that conventional accounting already acknowledges that you maximize profit by selling “the product with the highest contribution margin per unit of its scarce resource.”

In reality, TA is not really new: it simply renders an abstract accounting idea practical.

Cost (or absorption) accounting has the sole purpose of “satisfying outside reporting requirements,” but it is not the right tool for management decisions. Therefore: use CA for external reporting requirements, but TA for making management decisions.

Throughput Accounting can be Reconciled with Cost Accounting

Traditional accounting theory recognizes the underlying principle of TA: “sell the product with the highest contribution margin per unit of scarce resource.” However, the principle becomes actionable only through the systems-thinking approach of TOC. TA is just the “numbers tool” that makes the principle actionable from a financial point of view. (Other TOC tools cover the more operational and logical aspects.)

A significant corollary follows: TA is part of generally accepted accounting theory, and it can be reconciled with the GAAP for customary external reporting needs.

The opposition between TA and CA is more political than actual. The reconciliation between the two was certainly not helped by Goldratt’s unmitigated positions and statements. He was known for statements like: “Cost Accounting is productivity’s public enemy number one.”

The opposition has also been sustained by technical arguments, in particular because traditional CA provides artificial incentives to build inventories. This is the technical reason why TA has been poorly received by traditional accounting professionals. CA values inventory as a positive asset, while TA considers inventory a liability indicating a weakness, or even a dysfunction, in the whole organization. WIP and inventory are liabilities: they limit throughput and are indicative of deeper organizational and work-flow problems.

The duality is more artificial than real, and is also simple to reconcile. [SMITH-1999] explains: “Instead of trying to turn absorption cost accounting into relevant data, [there is] a much simpler solution. […] convert direct-costing information at the month end to absorption costing using a simple bridge. This approach is known by every CPA […] that deals with small manufacturing firms that supposedly have not been sophisticated enough […] to track overhead by product on a daily, hourly, or minute basis. The variance reporting that is generated at the end of every month, purported to control costs, is not useful or timely and actually puts departments into conflict with each other and the overall goal of maximizing throughput at a minimum cash outflow.” Smith also observes that this approach is “simple, effective, elegant, and inexpensive and requires no software investment.”

The important conclusion? There is no contradiction. Reconciling TA with CA is a simple accounting exercise any trained accountant can perform. The real challenge is often in making the accountants appreciate the overall operational benefit and true bottom-line impact of this approach. A lesser challenge is recognizing that TOC uses a different terminology for concepts that are well known in traditional accounting.

Now that we have seen that TA can co-exist with traditional CA, and that any dismissal of TA is mostly due to not recognizing that it is already part of traditional accounting theory, we can move on and see what this means for software engineering.

Throughput Accounting for Software Engineering

Software engineering is intangible, while the origins of TOC are in the very tangible manufacturing industries. To acknowledge these differences, [RICKETTS-2007] defines a Throughput Accounting for Software Engineering (TAE).

(Note: Ricketts also defines a more extensive Throughput Accounting for Software Businesses, covering sales and research in addition to software engineering proper, and distinguishing between selling a software product and selling a software service. He further distinguishes TA for software engineering from TA for other intangible service businesses, which differ because they are more labor-based and less automated and reusable. For this discussion, TA for software engineering alone is sufficient.)

TAE is defined as follows:

  • Throughput: TE is the rate of cash generated through delivery of working code into production […] It is computed as sales price minus direct costs, such as packaging, delivery, installation, training, support, and networking.

  • Investment: I is all money invested in software production systems plus money spent to obtain ideas for client-valued functionality. It thus includes software development tools and requirements gathering. […]

  • Operating Expense: OE is all money spent to produce working code from ideas. It is primarily direct labor of software engineers, but it also includes selling, general, and administrative costs.

The following performance measures are computed exactly the same way:

  • Net Profit: NP = TE - OE

  • Return on Investment: ROI = NP / I

A simple view is sufficient to draw meaningful conclusions: NP is the difference between the revenue generated and the implementation cost; ROI is the ratio between NP and the requirements gathering costs (assuming the cost of the software production system is already amortized). To increase NP and ROI, you must increase TE while decreasing I and OE.
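
A small worked example may help; the figures below are hypothetical and simply instantiate the TAE definitions for a single release:

```python
# Hypothetical TAE figures for one software release (all amounts in a single currency).
sales_price       = 500_000
direct_costs      = 50_000    # packaging, delivery, installation, training, support
requirements_cost = 40_000    # money spent to obtain ideas for client-valued functionality
tools_cost        = 10_000    # software production system (tools, environments)
engineering_labor = 300_000   # direct labor of software engineers, plus SG&A

TE = sales_price - direct_costs       # Throughput:        450,000
I  = requirements_cost + tools_cost   # Investment:         50,000
OE = engineering_labor                # Operating Expense: 300,000

NP  = TE - OE                         # Net Profit: 150,000
ROI = NP / I                          # 3.0

print(f"TE = {TE:,}  NP = {NP:,}  ROI = {ROI:.1f}")
```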

[RICKETTS-2007] suggests this is done “by gathering requirements rapidly yet accurately, creating software that customers value, and eliminating waste, which is composed of requirements and functions that are discarded before software enters production.” So minimize time spent gathering requirements and planning, and minimize the scope of the project.

These principles, derived from TOC’s systems-thinking approach and supported by TA, are comparable to what is endorsed by Agile and Lean processes. By contrast, applying these principles to traditional software processes conflicts with the need to identify “all possible” requirements upfront.

Example: Decrease Operating Expense by Avoiding Feature Creep

You can reduce OE by discarding requirements before starting implementation work, as described in my contribution to [SMITE-2010], where I presented a value-based technique for triaging requirements expressed as User Stories and estimated in terms of Story Points (as proposed by [COHN-2005]).

Each Story Point was associated with a corresponding estimated revenue value, which was recomputed after eliminating unnecessary stories. In an agile setting with CA, the “production cost” of a story point is always the same; the only way to “increase value” is to lower the cost per story point or to increase velocity/productivity. With TA, instead, you reason about the economic value each story carries: the stories “worth less” are eliminated. Every story eliminated reduces OE (the effort to produce the software).

Note:

  • From an Agile perspective, you “maximized the amount of work not done” as stated in one of the twelve principles of the Agile Manifesto [BECK-2001].

  • From a Lean perspective, you applied the principle of eliminating waste.

By recomputing the value per story point, the average value of the remaining work increases. With a reduced scope, project duration and time to market decrease; therefore ROI not only increases, but is realized much sooner.
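
A minimal sketch of such a triage is shown below. The backlog, the flat cost per story point and the revenue values are invented for illustration; they are not the actual data from [SMITE-2010]:

```python
# Hypothetical backlog: (story, story points, estimated revenue value).
backlog = [
    ("export to PDF",           5,  40_000),
    ("audit trail",             8,  90_000),
    ("custom color themes",     8,   5_000),   # low value relative to its size
    ("legal-compliance report", 13, 150_000),
]

COST_PER_POINT = 2_000  # assumed flat "production cost" per story point (the CA view)

def value_per_point(story):
    _, points, value = story
    return value / points

# Triage: drop stories whose estimated value does not even cover their cost.
kept = [s for s in backlog if value_per_point(s) > COST_PER_POINT]

def totals(stories):
    points = sum(p for _, p, _ in stories)
    value = sum(v for _, _, v in stories)
    return points, value

for label, stories in (("before", backlog), ("after", kept)):
    points, value = totals(stories)
    print(f"{label:6s}: {points:2d} pts  value {value:,}  "
          f"avg value/pt {value / points:,.0f}  OE {points * COST_PER_POINT:,}")
```

Dropping the low-value story reduces OE (fewer points to implement), while the average value per story point of the remaining backlog goes up.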

Example: Decrease Investment and Operating Expense with Open-Source Software

Adopting Open-Source software is a natural way to reduce I: you avoid developing the equivalent functionality yourself. The broader the coverage of the Open-Source solution, the more you decrease I. The narrower the proprietary extensions you develop yourself, the smaller the OE. Choose your market so that your proprietary development is as small as possible. Choose the Open-Source project to maximize coverage of your requirements and, conversely, to minimize what you have to implement yourself.

Example: Increase Throughput by Targeting the Long Tail

When it comes to choosing a market, focusing on a niche (i.e. the “Long Tail”) rather than a broad market allows you to increase the unit price of your sales. For instance, if you were in the CRM business, providing a solution targeted at certain professional categories, with some unique features (say, compliance with legal requirements), you could ask a higher price. Increasing the sales price is a way to increase throughput.

Combine this with the previous example: by embracing Open-Source software as a platform on top of which you develop your own solution, you free resources to develop other value-adding code. The custom code will add value appreciated by the market niche. With more resources, you can also ship more frequent releases, which (if paid for) increase throughput.

The key point: when there is a chance to free resources, don’t take it as an opportunity to lower costs by reducing head count. Instead, use the freed engineering capacity to develop further value-adding functions or products. That is the big difference that comes from focusing on throughput first and on costs last.

Considerations on Combining the Examples

The strategy of embracing an Open-Source project decreases OE (just like outsourcing), because there is less work to be done in-house. At the same time (and unlike outsourcing), it also reduces I, because you need no requirements gathering phase to define the functionality delivered by the Open-Source software (apart from evaluating its fitness for purpose), nor any investment in additional equipment to support the development of the platform. Outsourcing the development of that functionality, rather than exploiting Open-Source software, increases OE (though by less than doing the development in-house).

By choosing to target a long-tail niche market and defining the scope of functionality covered respectively by the Open-Source component and by your proprietary development, you give the proprietary code a positive impact on ROI, according to TA. Choosing Open-Source reduces I. Limiting the scope of the proprietary code reduces OE. The strategy of targeting a niche market increases T, because higher prices can be commanded.

Adopting Open-Source software is accounted for as I, with the very beneficial quality of being zero. There is a component of OE too, covering the new kinds of activities that come with Open-Source software: using it is not a zero-cost operation; only the acquisition is free. Consider the total cost of ownership, starting with the expense of gaining the knowledge of how to use the Open-Source software. Often it also means assigning development resources to the integration, extension and maintenance of the Open-Source software. Time must be invested in tracking down problems in it, especially when (as is often the case with smaller Open-Source solutions) support is not available. Still, the sum of all this additional OE is negligible compared to the effect of the I we are focusing on. Recall that I appears as the denominator of the ROI ratio, while OE is only one term in the numerator’s difference. Finally, consider that the work on the proprietary code is entirely covered by OE.
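
The following sketch compares the three sourcing options discussed above in TA terms. All figures are hypothetical, and T is kept constant across the options so that the effect of I and OE alone is visible:

```python
# Hypothetical figures: same niche product, three ways of sourcing the platform layer.
options = {
    "in-house":    {"I": 120_000, "OE": 600_000, "T": 800_000},
    "outsourced":  {"I": 120_000, "OE": 450_000, "T": 800_000},  # less in-house labor, same requirements work
    "open-source": {"I":  30_000, "OE": 350_000, "T": 800_000},  # free acquisition; OE covers learning,
                                                                 # integration and maintenance
}

for name, o in options.items():
    np_ = o["T"] - o["OE"]
    print(f"{name:11s}: NP = {np_:>8,}  ROI = {np_ / o['I']:.2f}")
```

Because I sits in the denominator of ROI, the reduction in I from adopting Open-Source software has a much larger effect than the corresponding reduction in OE.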

Software Production Metrics in Throughput Accounting

[RICKETTS-2007] suggests using the following metrics specifically for software production:

  • Production Quantity: Q = client valued functions delivered in working code

  • Inventory: V = ideas + functions in development + completed functions

  • Average Cost per Function: ACPF = OE/Q

The ACPF was originally presented by [ANDERSON-2003], who also defined the following:

  • Average Investment per Function: AIPF = I/Q

Ricketts suggests "Q, V, and ACPF correspond to T, I, and OE, respectively."

However, I consider this imprecise, and propose to add an Average Throughput per Function (ATPF), defined as:

  • Average Throughput per Function: ATPF = T/Q

The ATPF metric, rather than Q, is the one more correctly mapped to T. Using just Q implicitly treats cost (the assumption being that cost is proportional to quantity) as prevailing over throughput.

As illustrated in the example about decreasing operating expense by avoiding feature creep, the ATPF metric has important practical applications. To be precise, in that example the Average Revenue per Story Point was used; but revenue per story point/function can be considered a first approximation of ATPF, since throughput per story point just subtracts a constant factor, the totally variable costs, from the revenue figure. Triaging stories and eliminating those “worth less” increases both numbers, so the effect on decision making is unaltered by using the simplified metric.

Take advantage of these software-specific production metrics derived from TA. In particular, focus on increasing the ATPF and on decreasing the AIPF. Consider the ACPF last.
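
A sketch of these metrics with hypothetical quarterly figures:

```python
# Hypothetical quarterly figures for a software production system.
Q  = 30           # client-valued functions delivered in working code
T  = 450_000      # throughput generated by the delivered functions
OE = 300_000      # operating expense for the quarter
I  = 50_000       # investment (requirements gathering, tooling)
V  = 25 + 12 + 8  # inventory: ideas + functions in development + completed (undelivered) functions

ACPF = OE / Q  # Average Cost per Function:       10,000
AIPF = I / Q   # Average Investment per Function:  1,667 (approx.)
ATPF = T / Q   # Average Throughput per Function: 15,000

print(f"V = {V}  ACPF = {ACPF:,.0f}  AIPF = {AIPF:,.0f}  ATPF = {ATPF:,.0f}")
```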

Throughput Accounting’s Effects on Delivery

The different perspectives given by TA and CA with respect to software engineering are described by [RICKETTS-2007] as follows: “TAE is a radical alternative to Cost Accounting (CA), which focuses more on OE than T or I. As each software engineering task is performed, CA uses time sheets to add the cost of those tasks to the recorded value of the software being produced. So the longer a project lasts and the more effort it consumes, the more CA values the software asset. This creates no financial incentive for early completion, even though the business value to the client of undelivered software tends to go down with time.”

In TOC, this is very different, because “TAE does not record value added. It simply records I at the beginning of the project and T at the end. TAE dispenses with time sheets because effort is a fixed cost captured in OE. Thus, the longer a project lasts and the more effort it consumes, the more TAE increases its OE and decreases its T. This creates incentive to complete projects on time or early because that maximizes NP.”

This line of reasoning gives financial support to Agile/Lean processes, which promote early completion and frequent delivery on technical grounds (to better capture and live up to the client’s needs). TA creates incentives to complete projects on time or earlier for financial reasons: record I at the start and T at the end, while all effort is represented as a fixed running cost in OE.
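
As a sketch of this incentive, the snippet below computes NP for the same hypothetical project delivered after 6, 9 or 12 months. The monthly OE run-rate and the rate at which undelivered software loses value to the client are assumptions made purely for illustration:

```python
# Hypothetical project: T is recorded once, at delivery; OE accrues month by month.
T_AT_DELIVERY = 450_000   # throughput when working code reaches production
MONTHLY_OE    = 40_000    # fixed monthly run-rate of the engineering organization
VALUE_DECAY   = 0.02      # assumed monthly erosion of the client's business value

for months in (6, 9, 12):
    t = T_AT_DELIVERY * (1 - VALUE_DECAY) ** months
    np_ = t - MONTHLY_OE * months
    print(f"{months:2d} months: T = {t:,.0f}  NP = {np_:,.0f}")
```

The longer the project runs, the more OE accrues and the less T remains, so NP erodes quickly; delivering early maximizes NP, exactly as the quotation above states.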

Throughput Accounting’s Effects on Other Common Processes

TAE also provides a different viewpoint than CA on some common circumstances in software engineering, described by [RICKETTS-2007]:

  • When turnover occurs, CA just measures the cost of replacing lost software engineers. TAE captures this cost in OE, but TAE also measures the loss of T on the constraint, which can be many times larger because it is output lost by the entire organization.

  • When hiring occurs, TAE sees no increase in T unless the hiring is for the constrained resource. That is, hiring a non-constrained resource increases OE without increasing T. When outsourcing occurs, it decreases OE, but it may also shift which resource type is the constraint. If the constraint is outsourced, the decrease in OE may also decrease T.

  • When projects are constrained by schedule, budget, resources, and scope, each constraint needs a buffer to protect it from uncertainty, but buffers increase OE without increasing T. Formal methodologies and process maturity certifications attempt to reduce uncertainty and thereby reduce buffer sizes, but methodologies and certifications themselves increase OE.

Conclusions

The simplicity of the TOC logic reveals a general strategic direction, which can be verified and validated with the simple metrics of TA (T, OE and I). TOC gives strong arguments for pursuing (or not) any strategy under consideration, even without estimating or calculating costs and revenues, purely through logic. TOC’s strength is the simplicity of its approach to decision making when supported by TA; this is an advantage because it leads to decisions in less time.

TA can be used to make management decisions on all business processes, including turnover, hiring, outsourcing, choice of methodology, and so on. The trick is simply to relate any decision to T, OE and I, in order to make it informed and financially sound.

Published: July 27, 2012