
What do you optimize for?

Updated: Feb 28, 2019

Part 1 of 4

In your approach to managing software and systems development work, that is.

There are several ways to plan and track such work, and which you choose says a lot about what your organisation's focus is, both in terms of what the organisation believes it does and what it wants to optimize for. Over the course of a few blog posts we'll look at three popular choices:

  • “Traditional” project management

  • Scrum and XP–like Agile processes

  • Kanban–like Agile processes

And then try to draw some synoptic conclusions.

Each approach has its place. Understanding the assumptions behind each, and what each affords (that is, what it makes easy and most obvious to do) can help us decide which to use in which scenarios.

Each has an extensive literature and we can only scratch the surface here, in what is my personal interpretation of these approaches.

Traditional Project Management

Firstly, what is a “project”? It's not true that any given lump of work is best, or even usefully, thought of as a project.

There are two very similar but interestingly different definitions of a “project”. The PMI PMBOK says that a project is:

a temporary endeavor[sic] undertaken to create a unique product, service, or result.

Whereas PRINCE2 says that a project is:

[…] a temporary organization[sic] that is created for the purpose of delivering one or more business products according to an agreed Business Case

The common thread here is that projects are temporary: they begin and end. By implication, and in fact, they also have budgets of time and money. The products or services which they create are not temporary, but are handed off to an on–going operations capability which is not part of the project. PMI says that the project outcome is unique, which I interpret to mean something like new, or novel. PRINCE2 is a bit more specific, in that there is a Business Case for the product to satisfy. The Business Case will likely talk about the budgets and the expected return on that investment to be realized by the product or service once it is in production. Strictly speaking, projects don't, in and of themselves, make money. They are an investment. At the end of the project, the time and treasure invested in it is a sunk cost. The temporary organisation or endeavour is a cost centre, not a profit centre.

I think this leads directly to the question of what project management optimizes for: cost. Both directly, in terms of organizing the work to take the least time by the fewest people, and indirectly, by trying to manage risks down as far as possible. This optimization for cost is baked into the primary artifact of traditional project management, as much a symbol for it as the lightbulb is for innovation: the Gantt Chart.

Mr Gantt invented his charts, although as we shall see, not exactly The Gantt Chart, for the purpose of planning and tracking the work done in machine shops attached to steel foundries. His techniques were soon extended into other lines of manufacturing as he and his boss, Frederick Taylor, pretty much invented “Management Consulting” and promoted their techniques widely. Gantt addressed production, not project work as defined above. An order for some parts is secured, and the work to fulfill the order is done by a standing staff with a given production apparatus; the order will typically be for multiples of a given design.

Production is about large multiples, all the same (within tolerances). Projects, really, are about change: changing the business, changing the client, changing the relationships between stakeholders.

Gantt described his goal for changing management like this:

Improving the system of management means the elimination of elements of chance or accident, and the accomplishment of all the ends desired in accordance with knowledge derived from a scientific investigation of everything down to the smallest detail of labour, for all misdirected effort is simply loss, and must be borne either by the employer or the employee. [emphasis in original]

— Gantt, H. Work, wages, and profits, The Engineering Magazine Co. 2nd Ed. 1919 p35

It's worth noting that this, written in 1919 but based on work begun in the late 19th century, is appealing to an idea of what it means to do a “scientific investigation” of something very different from what we mean by that today. Fisher only published Statistical Methods for Research Workers in 1925 and introduced the idea of gathering data to potentially reject the null hypothesis in The Design of Experiments of 1935. What Gantt means by “scientific investigation” is this:

  • An analysis of the operation into its elements

  • A study of these elements separately

  • A synthesis, or putting together the results of our study

ibid, p258

Simply: atomism.

The goal of planning, for Gantt, was to find the quickest way to carry out each elementary operation and then the arrangement of those operations back into the whole which would be the quickest overall. This work, and work like it, laid the foundation for traditional management practices, although note again that Gantt is talking about batch processes, not projects. Taylor promoted the Time Study, and later the Gilbreths' Motion Study came to be very influential. The model was always that the work was very well understood, and the open question was how to do it in the way that used, and wasted, the least time, effort, and ultimately, money.

It is assumed in traditional management that the people who do the work are not able to carry out this analysis, nor the synthesis. Experts in work, rather than in the operations themselves, will figure it out and provide detailed instructions. The workers' job then is to faithfully carry out their detailed instructions, as quickly and accurately as possible. And that's not a figure of speech: Gantt's aim was that the work is done, and planned to be done, literally as quickly as is physically possible. And the worker is to track the actual time taken against that.

Figure 1 shows an alleged example of an instruction card for a machinist:

Instruction Card for Turning a Crank-Shaft, Bethlehem Steel Co., July 17 1901
Fig. 1 Instruction Card for Turning a Crank-Shaft, Bethlehem Steel Co., July 17 1901 —ibid, p264

I don't believe this is a real one, I think it was recreated for Gantt's book, but this is the kind of thing, certainly. In this case, it is suggested, the un–named lathe operator earned a bonus over the usual piece rate as the crankshaft was completed in two minutes less than the time the expert who wrote the card thought it could be done in! There would be a process for feeding that back to the experts, and the next instruction card would have the target time reduced—not through the assumption, at least so Gantt says, that from now on the machinists will all work harder, but because a better way to do the work has been found. In Gantt's world, this means that everyone benefits: faster turn-around leads to more profit for the business and over–performance means greater reward for the worker, as line 14 indicates. Note also the claim that making this shaft, presumably in the old “unscientific” way, used to take 54 hours. Which seems like a lot.

This way of thinking about work is remarkably robust. Almost 120 years later we still see tickets in some software and systems development shops with similar levels of detail about what to do, and a similar level of fixation with time taken vs planned. To be fair to Gantt and his collaborators, their actual writing always emphasizes that workers who are revealed, by this sort of data gathering and analysis, to be below–average in production should be the recipients of help, support, and training to improve, and maybe should be re–assigned to work that is better suited to them.

Gantt didn't exactly invent the Gantt Charts that we know today, but he did produce similar charts to track the planned vs actual delivery of batches of parts against time, the actual vs expected production by a given worker, and so on. The modern Gantt Chart perhaps owes more to the “Harmonogram” first described by Karol Adamiecki in 1903, which reflects the dependencies between tasks, thus allowing critical path analysis, and tracks the progress against plan graphically.

The modern Gantt Chart looks like the examples below, each illustrating the general theme but also some particular features. They are reproduced at small size, because the details don't matter for our purposes. I thank the product developers who made them available. Both charts show a hierarchical Work Breakdown Structure, and for each leaf task some detail of when it is planned to start, to end, and (therefore) its expected duration. In both cases people, workers, or as they are often known in this context “resources”, are shown assigned to tasks. The plans cover a total duration of many weeks or months, but task duration is given in exact days, representing a claim to precision of the order of ±4%. I have seen duration given to two decimal places of days on plans covering years, a claimed precision of ±0.01%. This does not seem reasonable for software development. Especially not for activities scheduled far in the future. Although it might be for production activities that have a very strong idea of "standard work".

An example Gantt Chart
Fig. 2 Example Gantt Chart [Seavus Project Viewer]

Both example Gantt Charts show derived start/duration/end brackets for non–leaf tasks and finish–start dependencies between tasks. Figure 2 additionally shows the “critical path” of the project, highlighted in red. If any one of those tasks takes longer, or otherwise finishes later, than planned, the whole project takes longer and finishes later. Which is assumed to be an absolutely bad thing.

An example Gantt Chart
Fig. 3 Example Gantt Chart [ConceptDraw Project]

Figure 3 also shows the percent complete of each task, including rolled–up intermediate non–leaf tasks, and also what proportion of each worker's time is allocated to each task. The tiny “Resource Usage View” in the tray below the main chart shows which worker, that is, which “resource”, is over–allocated, highlighted in red. Most traditional PM tools can do that.

In the case of a Gantt Chart that shows a “% complete” it's often unclear what this means. It can be the proportion expended thus far of the planned effort, in which case it can grow to much more than 100%, or it can be (as in the Harmonogram, as it happens) the proportion delivered thus far of the planned delivery. In the case of a machine shop, this is easy to measure: the order was for 10, we have made 5. In the case of software and systems development, it is much harder to assess, which can result in “% complete” approaching 100% only asymptotically. Advanced users might appeal to ideas such as Earned Value to convert between measured input and some sort of expected idea of output.
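To make the ambiguity concrete, here is a toy sketch with hypothetical numbers (the function names are mine, not from any PM tool) contrasting the two readings of “% complete”:

```python
# Two readings of "% complete" (hypothetical numbers throughout).
# Effort-based: hours spent over hours planned -- can exceed 100%.
# Delivery-based: units delivered over units ordered -- Harmonogram-style.

def pct_complete_effort(hours_spent: float, hours_planned: float) -> float:
    """Proportion of the planned effort expended so far."""
    return 100.0 * hours_spent / hours_planned

def pct_complete_delivery(units_done: int, units_ordered: int) -> float:
    """Proportion of the planned delivery achieved so far."""
    return 100.0 * units_done / units_ordered

# A task planned at 40 hours has consumed 50 hours, but only 5 of the
# 10 ordered parts have been made:
print(pct_complete_effort(50, 40))   # 125.0 -- "done" by effort, yet half delivered
print(pct_complete_delivery(5, 10))  # 50.0
```

The same task can look 125% complete by one measure and 50% complete by the other, which is exactly the confusion described above.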

Enabling Assumptions

So, what assumptions must we make in order to apply the Gantt Chart, and the kind of thinking it implies, to a stream of work?

  • We can create a valid Work Breakdown Structure (WBS)

What would invalidate a WBS? A WBS is invalid if the effort, budget, or other inputs required by the subordinate items of a larger item do not sum exactly to the effort, budget, or whatever, of the superordinate item. That is, the effort to deliver item 3.1.4 must be exactly the sum of the effort to deliver items,,, and so on. This means that WBS items cannot overlap. Each of them, at every level, must stand alone as a block of investment to deliver, in the case of a project, a benefit advertised in the Business Case.
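That summation rule can be sketched as a simple recursive check. The structure, item ids, and effort numbers below are hypothetical, for illustration only:

```python
# A sketch of the WBS summation rule: a WBS is valid only if every non-leaf
# item's effort equals exactly the sum of its children's efforts.

def validate_wbs(node: dict) -> bool:
    """Return True if the summation rule holds for this item and all below it."""
    children = node.get("children", [])
    if not children:
        return True  # leaf items trivially satisfy the rule
    child_sum = sum(c["effort"] for c in children)
    return node["effort"] == child_sum and all(validate_wbs(c) for c in children)

# Hypothetical item 3.1.4 decomposed into three sub-items:
wbs = {
    "id": "3.1.4", "effort": 30, "children": [
        {"id": "", "effort": 10},
        {"id": "", "effort": 12},
        {"id": "", "effort": 8},
    ],
}
print(validate_wbs(wbs))  # True: 10 + 12 + 8 == 30
```

If a child's effort were quietly shared with another item, or some effort sat outside any child, the check fails, which is the sign of overlapping or incomplete decomposition.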

A WBS is not necessarily invalid, but is at best highly questionable if it talks about activities rather than outcomes. In the original US DoD formulation, the items in a WBS are each an actual thing, to be sourced, found or made. In broader usage, they might be a delivered benefit or other kind of outcome, but they should not be actions, nor activities. Watch out for activities disguised as things, where the WBS item names a thing but is understood to mean “the activities that will result in the thing being made”. Apart from anything else, it is hard to be sure that WBS elements that are activities conform to the rule above about investment. If the WBS elements are things, or outcomes, we are in with a much better chance.

The US DoD, in MIL-STD-881D, says explicitly to exclude the following from WBSs:

  1. requirements analysis

  2. test engineering

  3. acquisition phases

  4. rework and retesting

  5. quality initiatives, reform initiatives, other cost saving efforts

  6. warranty work

  7. elements borrowed from the org. chart

  8. tooling

These are all part of delivering the WBS items and should not be planned or tracked separately.

The next time you see a WBS, maybe embedded in a Gantt Chart or maybe elsewhere, consider whether or not it really is valid.

And even if valid, is it any good? In order for a WBS to be a useful basis for a plan, it also has to be roughly correct. When the work at hand is to, as it was in the early days of the technique, build a tangible apparatus to meet a fairly fixed requirement, one could imagine doing that. When the desired outcome depends on users interacting with a system to be built, it becomes a bit more tricky.

  • Managers can work out the dependencies between WBS items, far into the future

This is Gantt's atomistic thinking. A larger WBS item, if it is a thing or outcome, is delivered or achieved as soon as all of its parts are delivered or achieved. If WBS items are things, literally parts of a whole, it's relatively easy to see what dependencies there are between them. Similarly for outcomes. If WBS items are activities, it is much harder to say how they depend on one another.

  • Managers can, and should, allocate workers (aka “resources”) to WBS items far into the future

Of course, the staff available may change, their skills and interests may change, or they may simply become fatigued or bored with the work. This assumption interacts with the one above to lead managers to schedule as many tasks as possible in parallel, to pull in schedules.

  • Somebody can, with high precision, determine how much investment a WBS item will require, even if that investment will take place far into the future

This is perhaps the second strongest assumption, after the one immediately above. It takes us into the realm of estimation. Estimating is hard, but possible. The key is that an estimate is not a prediction, is not a commitment, is not even (in procurement terms) a quote, it is an estimate. It reflects uncertainty. Good estimates are very open and explicit about this. Good estimates come as a range, with narrative something like:

  • we don't see how the work can possibly physically be done in less than x,

  • most likely it will take around y,

  • we will be astonished and embarrassed and will probably have to stop and fundamentally re-plan everything if it takes longer than z.

where of course x < y < z and probably y < (x + z) / 2 as well. This is well summarized by modelling estimated duration as a random variable governed by a β-distribution, scaled so that its support runs from x to z with its mode at y, an approach best known from PERT.

Some Example PERT-style β-distributions
Some Example PERT-style β-distributions
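The conventional PERT summary of such a three-point estimate weights the mode four times: the mean is (x + 4y + z) / 6 and the standard deviation (z − x) / 6. A minimal sketch, with hypothetical numbers:

```python
# The classic PERT summary statistics for a three-point estimate
# (x = optimistic, y = most likely, z = pessimistic).

def pert_mean(x: float, y: float, z: float) -> float:
    """Expected duration: the mode y counted four times against the extremes."""
    return (x + 4 * y + z) / 6

def pert_stddev(x: float, z: float) -> float:
    """Conventional PERT spread: one sixth of the full range."""
    return (z - x) / 6

# Hypothetical estimate: "can't be done in less than 10 days, most likely 15,
# re-plan everything if it takes more than 30".
x, y, z = 10, 15, 30
print(round(pert_mean(x, y, z), 2))  # 16.67 -- pulled above y by the long tail
print(round(pert_stddev(x, z), 2))   # 3.33
```

Note how the skew of the distribution (y < (x + z) / 2) pulls the expected duration above the most likely value, which is exactly why planning to y, never mind x, is optimistic.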

Any plan which is based on a belief that the actual, a, will be less than x is immediately doomed, but such plans do occur. Even aspiring to improvements that will make it turn out that a < x is doomed. Even aspiring to improvements that will make it turn out that a < y is wildly optimistic and very unwise. These approaches are sometimes known as a “stretch goal” and imagined to be motivating, much in the style of Gantt's bonus for over–performance. But writing software is not turning metal.

Notice that the model estimate is a claim made by a “we”. Good estimates are a group effort. Estimates made by one person are unlikely to be useful. Estimates made by one person who isn't even the one who will do the work are another indication that the plan is probably doomed, in the case of software and systems development work. Again, compare with Gantt's instruction cards. Writing code is not like turning a shaft.

  • Workers can, with no loss of effectiveness or efficiency, work on multiple WBS items in a month, maybe a week, maybe even a day

It's amazing how many Project Managers seem to think this way. Traditional project management, as manifested in Gantt Charts, often seeks to answer one of two questions, either: if we start now, given the capability we have, how soon can we be done (aka “planning from the left”), or if we have to be done by some fixed day, how late can we start, and with what capability (aka “planning from the right”). In both cases, reducing the elapsed time, and to second order, also the monetary investment, taken to complete the WBS items for the whole piece of work is the implied goal.

But, allocating a worker to two or more items concurrently guarantees that each item will be delivered later in real time than either of them could have been delivered if done serially. This is simple arithmetic, but you will still hear some Project Managers say that “we don't have time” for one item to be completed before the other is begun. In such cases I can only infer that they expect the poor “resource” assigned to the multiple items to do a large amount of uncompensated overtime to make up the difference.

Timelines showing two tasks taking longer when multi-tasked
Fig. 4 Multi-tasking expands timelines

And, worse yet, as illustrated in Figure 4, the inevitable inefficiencies involved in context switching between WBS items also guarantee that both items together will not only take longer but also cost more to deliver than both of them could have if done serially. This is also fairly obviously true, but you will hear some Project Managers say that “we can't afford” for one item to be completed before the other is begun. See comment above.

This is a very strong assumption, and very hard to meet because, well…tasks, people, and for that matter time itself, just don't work that way.
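The arithmetic can be sketched with a toy model. The durations, slice length, and switching cost below are hypothetical, but the shape of the result is general: done serially, the first task ships early; interleaved, neither ships until near the end, and the switching overhead pushes the total out further still.

```python
# A toy model of serial vs interleaved work on two tasks, with a fixed
# cost paid on every context switch. All numbers are hypothetical.

def serial_finish_times(durations):
    """Each task's delivery time when tasks are done one after another."""
    finishes, clock = [], 0.0
    for d in durations:
        clock += d
        finishes.append(clock)
    return finishes

def interleaved_finish_times(durations, slice_len, switch_cost):
    """Delivery times under round-robin time slicing, paying switch_cost
    per slice (a simplification: the final slice also pays it)."""
    remaining = list(durations)
    finishes = [None] * len(durations)
    clock = 0.0
    while any(f is None for f in finishes):
        for i, rem in enumerate(remaining):
            if finishes[i] is not None:
                continue
            work = min(slice_len, rem)
            clock += work + switch_cost
            remaining[i] -= work
            if remaining[i] == 0:
                finishes[i] = clock
    return finishes

# Two 10-day tasks, sliced into 2-day chunks with half a day lost per switch:
print(serial_finish_times([10, 10]))               # [10.0, 20.0]
print(interleaved_finish_times([10, 10], 2, 0.5))  # [22.5, 25.0]
```

Serially, task A ships on day 10 and task B on day 20; interleaved, A slips to day 22.5 and B to day 25. Both deliveries are later, and the total cost is higher, exactly as the figure claims.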

Many of these assumptions, apart from the one about multi–tasking, are sometimes met, even in software or systems development projects. For example, I once worked with a team whose aim was to build interactive Flash applications which would emulate exactly, on a web page, the user interface of a given mobile phone handset. The use case was that a subscriber calls a mobile network callcentre with some issue and the callcentre agent would walk through a user journey on the phone with the subscriber, stepping through the same user journey on the Flash app, seeing what the subscriber should see. For a given version of the handset firmware, all user journeys on it could be mapped out completely—and were, as the handset manuals weren't always reliable. This was back in the days when there were more than two kinds of phone, even to a first approximation, so the team had done lots of these emulators, they understood the work very well. And, Flash had a very constrained set of things you could script into it. Most of the assumptions above could be met and that team managed its software development projects very effectively using Gantt Charts.


And what are the affordances, the “perceived action possibilities”, of this tool, the Gantt Chart?

One is to find, and then try to minimize, the Critical Path, the chain of WBS items which most strongly conditions the shortest duration of the whole project. More subtle optimizations can be made through “resource levelling” in which the allocation of resources to WBS items, and the parallelization of WBS items, is manipulated both to reduce the duration of the critical path and to maximize, and make consistent, the utilization of resources. Back in Gantt's day, that made sense as a goal, not that he used these terms, as the primary “resources” in question were highly capital–intensive machine tools, lathes, drill presses, metal shapers, that kind of thing. These need to be kept utilized in order to fund their capital outlay and depreciation.
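The critical path itself is just the longest path through the finish–start dependency graph. A minimal sketch, with hypothetical tasks and durations:

```python
# A minimal critical-path calculation: the longest chain through the
# finish-start dependency graph sets the shortest possible project duration.
# Tasks, durations, and dependencies here are hypothetical.

def critical_path(tasks, deps):
    """Return (duration, path) of the longest dependency chain.

    tasks maps task name -> duration; deps maps task -> list of prerequisites.
    Assumes the dependency graph is acyclic.
    """
    memo = {}

    def longest(t):
        # Longest chain ending at t = longest chain into any prerequisite, plus t.
        if t not in memo:
            best, best_path = 0.0, []
            for p in deps.get(t, []):
                d, path = longest(p)
                if d > best:
                    best, best_path = d, path
            memo[t] = (best + tasks[t], best_path + [t])
        return memo[t]

    return max((longest(t) for t in tasks), key=lambda r: r[0])

tasks = {"design": 5, "build": 10, "test": 4, "docs": 3}
deps = {"build": ["design"], "test": ["build"], "docs": ["design"]}
print(critical_path(tasks, deps))  # (19.0, ['design', 'build', 'test'])
```

Here the documentation task has plenty of slack, but any slip on design, build, or test pushes the whole project out, which is why those tasks end up highlighted in red on the chart.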

So we can say in summary that what traditional project management is about, as reflected in its premier planning and tracking artifact at least, is delivering a well–understood thing quickly while keeping everyone involved as busy as possible. If you use these techniques, good questions to ask are:

  • Does the return on the investment in our work start to accrue long after we stop doing it?

  • How well do we understand what we need to do, how far out? And,

  • Is the biggest benefit to our customers and our business derived by keeping all the developers at least 100% utilized on planned activities?

The answers might point towards using traditional project management.

In Part 2 we will look in a similar way at Agile approaches to development.


Image Credits:

  • Seavus Project Viewer by Darkodazines, CC BY-SA 4.0

  • ConceptDraw by AnnaKorlyakova, CC BY-SA 4.0

  • PERT Distributions by David Vose, CC BY-SA 4.0, via Wikimedia Commons

