Thursday, 20 November 2003

Software Development Productivity

Productivity is the prime determinant of our standard of living. If the revenue generated per work hour goes up, income levels rise; if productivity lags, wages decline. When all else is equal in a market, the more productive company will enjoy greater profits. Thus the key to sustaining and increasing wages in the software development industry is year-to-year improvement in software development productivity.

Over the decade of the 1990’s, productivity increases in the technology sector resulted largely from increased capability of the underlying technology and intense customer demand for new technology. In recent years, however, software development productivity has stagnated as the demand for newer, faster technology has flattened. Thus it should come as no surprise that wages in the software development profession have flattened or declined. In order for the software development industry to see rising incomes, year-to-year increases in software development productivity must be generated.

As the technology sector matures, it can no longer depend on increasing growth in the underlying technology to fuel productivity increases. The time has come for serious efforts to increase productivity through more efficient use of labor and more effective value propositions for customers. This is how more mature economic sectors have been increasing productivity for decades.

But first we need to define what we mean by productivity improvement in the software development industry. Traditionally, we have measured productivity as thousand lines of code (KLOC) per labor hour. However, the key process of a development activity is the transformation of ideas into products. To measure the real productivity of software development, we need to look at how efficiently and effectively we turn ideas into software. So perhaps we should start with a new definition of software development productivity:
  • For companies that develop and sell software as a product, productivity may be defined as the revenue generated per employee.
  • For internal IT organizations, productivity may be defined as the increased revenue realized by the supported business per dollar spent by the IT organization. [1]
There are three basic approaches to productivity improvement:
  1. Reduce product costs by eliminating investments in product features that customers do not find valuable.
  2. Reduce indirect costs by streamlining processes and eliminating inefficiencies in development, delivery and support.
  3. Increase revenue by adding more value to a product so customers will pay more for it.
Let’s explore each of these approaches to improving software development productivity.
1. Reduce Software Development Effort.
About two-thirds of the features of a typical software system are seldom or never used. Only twenty percent of the features are used frequently. [2]  Eliminating extra features that no one really wants represents the single largest opportunity for increasing software development productivity in most organizations.
Extra features are generated by a software development process that attempts to nail down the features in a system at the beginning of the development process. Typically, customers are asked to decide at the start of a project what features they want. Often they have little incentive to keep the feature list short, but they are penalized if they forget to include something. Is it any wonder that a feature list generated under such incentives contains far more features than are really necessary?
The biggest opportunity for reducing software development effort is to limit overproduction of features by developing features on an as-needed basis. For many companies, this may require a paradigm shift in architecture and design. However, it is becoming increasingly apparent that there are many disciplined approaches to software development that provide for an emergent architecture. Key approaches include:
  • Automated test harnesses developed at the same time as the underlying code (a minimal sketch follows this list)
  • Refactoring the code on an on-going basis to keep its design simple and clean
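To make the first of these approaches concrete, here is a minimal sketch of a test harness written alongside the code it protects, using Python’s built-in unittest module. The function and the behaviors tested are hypothetical, chosen only for illustration:

    import unittest

    # The production code and its tests grow together; each new behavior
    # gets a test at the same time as the code that implements it.
    def parse_quantity(text):
        """Parse a user-entered quantity, rejecting junk early."""
        value = int(text.strip())
        if value < 1:
            raise ValueError("quantity must be positive")
        return value

    class ParseQuantityTest(unittest.TestCase):
        def test_accepts_plain_number(self):
            self.assertEqual(parse_quantity(" 3 "), 3)

        def test_rejects_zero(self):
            with self.assertRaises(ValueError):
                parse_quantity("0")

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_quantity("three")

    if __name__ == "__main__":
        unittest.main()  # rerun on every change

Because the harness runs on every change, the second approach becomes safe as well: refactoring can proceed freely when the tests catch regressions immediately.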
2. Streamline the Development Processes.
The measure of maturity of an organization is the speed at which it can reliably and repeatedly execute its key processes. A mature software development organization is one that can rapidly, reliably, and repeatedly translate customer needs into deployed code. Too often, rapid software development has been equated with sloppy work, so attempts to streamline the development process are often looked upon with suspicion. However, in industry after industry, when sequential development processes are replaced with concurrent development processes, costs are slashed, quality is improved, and development time is dramatically reduced.
Organizations which record year-on-year productivity improvements spend a lot of time focusing on streamlining key processes while increasing their reliability. They do this by focusing on the flow of value through the process. There are three main techniques used to do this:
  • Value stream mapping is a tried and true approach to streamlining value-creating processes. A value stream map of the current state is invaluable for spotting waste; a value stream map of the desired future state is a roadmap for process improvement. Poppendieck.LLC helps organizations use value stream mapping to improve software development productivity.
  • Kaizen events are a typical implementation vehicle for streamlining operational processes. In software development, a modification of Kaizen events is usually required, because software improvement efforts usually take more time than is generally allocated for Kaizen events. We facilitate effective Kaizen events for software development.
  • An Integrated Product Team (IPT) is frequently used to facilitate information flow across an entire development team. Software development IPT’s involve not only architects, designers and developers, but also those responsible for deployment, operations, customer support, and maintenance.
3. Increase Customer Value.
Understanding customer value well enough to obtain additional revenue is the third key to increasing software development productivity. In general, customers cannot be relied upon to tell a software development organization how to increase the value of its offerings. In fact, customers generally think of software as a tool that should adapt to their needs over time. This makes understanding customer value elusive not only during a software development project, but even after deployment. And yet, the price that can be charged for software is directly related to understanding how to increase its value proposition.
We are not likely to increase software’s value proposition unless we increase our understanding of how customers might use our software to create value for themselves. There are three steps to increasing customer value:
  • Iterative development with frequent releases creates a short feedback loop between customers and developers. The key to successful iterative development is to prioritize the feature list, implement features in order of business value, and deploy them as soon as possible (see the sketch after this list). The short feedback loop created by iterative development and early deployment not only limits the development of unnecessary code, it brings to light innovative new uses of technology that can significantly improve business results.
  • The next step to understanding customer value is to understand how our customers create value for their customers. Increasing the support of key value-creating processes for customers is the most likely source of increased revenue for software development. One method of discovering how our customers create value is to focus on their key processes and map their value stream.
  • The final step to providing customer value is to link organizations through partnerships that focus on the overall productivity of the combined enterprise. Peter Drucker noted in Management Challenges for the 21st Century [3] that “In every single case…the integration into one management system of enterprises that are linked economically rather than controlled legally, has given a cost advantage of at least 25 percent and more often 30 percent.” The bottom line is that when organizations work together for their mutual benefit rather than optimizing the results of the individual organizations, a large increase in overall productivity can be realized.
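As a toy sketch of implementing features in order of business value, in Python and with made-up feature names and numbers, an iteration plan might be chosen like this:

    # Hypothetical backlog entries: (feature, business value, cost in points)
    backlog = [("export report", 8, 5), ("single sign-on", 13, 8),
               ("dark mode", 2, 3)]
    capacity = 13  # points available in this iteration

    # Implement features in order of value delivered per unit of cost.
    ranked = sorted(backlog, key=lambda f: f[1] / f[2], reverse=True)
    plan, used = [], 0
    for name, value, cost in ranked:
        if used + cost <= capacity:
            plan.append(name)
            used += cost
    print(plan)  # ['single sign-on', 'export report']

Features below the cutoff simply wait for a later iteration; if no one asks for them again, they were never needed.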
Software development organizations have always emphasized process; however, the focus has been on predictable delivery of pre-defined scope rather than increased productivity. Not surprisingly, processes which did not value productivity did not deliver productivity increases; quite often they decreased productivity instead. We need to move from processes which define scope early to processes which allow only the most valuable scope to be addressed. We need to move from slow processes with many wasteful steps to streamlined processes which eliminate non-value-adding activities. We need to focus on increasing revenue potential over cutting costs by discovering how we can help our customers increase their productivity.
____________________
References and Notes
[1] Productivity for internal IT organizations may also be defined as increased revenue realized by the supported business per employee in the IT organization, but such a definition would ignore make-vs-buy decisions, not to mention outsourcing.

[2] Johnson, Jim, Chairman of The Standish Group, ‘ROI, It’s Your Job,’ Keynote, Third International Conference on Extreme Programming, Alghero, Italy, May 26–29, 2002.

[3] Peter Drucker, Management Challenges for the 21st Century, Harper Business, 2001, p. 33.

Friday, 1 August 2003

Concurrent Development

When sheet metal is formed into a car body, a massive machine called a stamping machine presses the metal into shape. The stamping machine has a huge metal tool called a die which makes contact with the sheet metal and presses it into the shape of a fender or a door or a hood. Designing and cutting these dies to the proper shape accounts for half of the capital investment of a new car development program, and drives the critical path. If a mistake ruins a die, the entire development program suffers a huge setback. So if there is one thing that automakers want to do right, it is the die design and cutting.

The problem is, as the car development progresses, engineers keep making changes to the car, and these find their way into the die design. No matter how hard the engineers try to freeze the design, they are not able to do so. In Detroit in the 1980’s the cost of changes to the design was 30–50% of the total die cost, while in Japan it was 10–20% of the total die cost. These numbers seem to indicate that the Japanese companies must have been much better at preventing change after the die specs were released to the tool and die shop. But such was not the case.

The US strategy for making a die was to wait until the design specs were frozen, and then send the final design to the tool and die maker, which triggered the process of ordering the block of steel and cutting it. Any changes went through an arduous change approval process. It took about two years from ordering the steel to the time the die would be used in production. In Japan, however, the tool and die makers order up the steel blocks and start rough cutting at the same time the car design is starting. This is called concurrent development. How can it possibly work?

The die engineers in Japan are expected to know a lot about what a die for a front door panel will involve, and they are in constant communication with the body engineer. They anticipate the final solution and they are also skilled in techniques to make minor changes late in development, such as leaving more material where changes are likely. Most of the time die engineers are able to accommodate the engineering design as it evolves. In the rare case of a mistake, a new die can be cut much faster because the whole process is streamlined.

Japanese automakers do not freeze design points until late in the development process, allowing most changes to occur while the window for change is still open. When compared to the early design freeze practices in the US in the 1980’s, Japanese die makers spent perhaps a third as much money on changes, and produced better die designs. Japanese dies tended to require fewer stamping cycles per part, creating significant production savings.

The significant difference in time-to-market and increasing market success of Japanese automakers prompted US automotive companies to adopt concurrent development practices in the 1990’s, and today the product development performance gap has narrowed significantly.

Concurrent Software Development
Programming is a lot like die cutting. The stakes are often high and mistakes can be costly, so sequential development, that is, establishing requirements before development begins, is commonly thought of as a way to protect against serious errors. The problem with sequential development is that it forces designers to take a depth-first rather than a breadth-first approach to design. Depth-first forces making low-level, dependent decisions before experiencing the consequences of the high-level decisions. The most costly mistakes are made by forgetting to consider something important at the beginning. The easiest way to make such a big mistake is to drill down to detail too fast. Once you start down the detailed path, you can’t back up, and you aren’t likely to realize that you should. When big mistakes can be made, it is best to survey the landscape and delay the detailed decisions.

Concurrent development of software usually takes the form of iterative development. It is the preferred approach when the stakes are high and the understanding of the problem is evolving. Concurrent development allows you to take a breadth-first approach and discover those big, costly problems before it’s too late. Moving from sequential development to concurrent development means starting to program the highest-value features as soon as a high-level conceptual design is determined, even while detailed requirements are still being investigated. This may sound counterintuitive, but think of it as an exploratory approach which permits you to learn by trying a variety of options before you lock in on a direction that constrains implementation of less important features.

In addition to providing insurance against costly mistakes, concurrent development is the best way to deal with changing requirements, because not only are the big decisions deferred while you consider all the options, but the little decisions are deferred as well. When change is inevitable, concurrent development reduces delivery time and overall cost, while improving the performance of the final product.

If this sounds like magic – or hacking – it would be if nothing else changed. Just starting programming earlier, without the associated expertise and collaboration found in Japanese die cutting, is unlikely to lead to improved results. There are some critical skills that must be in place in order for concurrent development to work.

Under sequential development, US automakers considered die engineers to be quite remote from the automotive engineers; so too, programmers in a sequential development process often have little contact with the customers and users who have requirements and the analysts who collect requirements. Concurrent development in die cutting required US automakers to make two critical changes: the die engineer needed the expertise to anticipate what the emerging design would need in the cut steel, and had to collaborate closely with the body engineer.

Similarly, concurrent software development requires developers with enough expertise in the domain to anticipate where the emerging design is likely to lead, and close collaboration with the customers and analysts who are designing how the system will solve the business problem at hand.

The Last Responsible Moment
Concurrent software development means starting development when only partial requirements are known and developing in short iterations which provide the feedback that causes the system to emerge. Concurrent development makes it possible to delay commitment until the Last Responsible Moment, that is, the moment at which failing to make a decision eliminates an important alternative. If commitments are delayed beyond the Last Responsible Moment, then decisions are made by default, which is generally not a good approach to making decisions.

Procrastinating is not the same as making decisions at the Last Responsible Moment; in fact, delaying decisions is hard work. Here are some tactics for making decisions at the Last Responsible Moment:

  • Share partially complete design information. The notion that a design must be complete before it is released is the biggest enemy of concurrent development. Requiring complete information before releasing a design increases the length of the feedback loop in the design process and causes irreversible decisions to be made far sooner than necessary. Good design is a discovery process, done through short, repeated exploratory cycles.

  • Organize for direct, worker-to-worker collaboration.  Early release of incomplete information means that the design will be refined as development proceeds. This requires that upstream people who understand the details of what the system must do to provide value communicate directly with downstream people who understand the details of how the code works.

  • Develop a sense of how to absorb changes.  In ‘Delaying Commitment,’ IEEE Software (1988), Harold Thimbleby observes that the difference between amateurs and experts is that experts know how to delay commitments and how to conceal their errors for as long as possible. Experts repair their errors before they cause problems. Amateurs try to get everything right the first time and so overload their problem solving capacity that they end up committing early to wrong decisions. Thimbleby recommends some tactics for delaying commitment in software development, which could be summarized as an endorsement of object-oriented design and component-based development:

  • Use Modules.  Information hiding, or more generally behavior hiding, is the foundation of object-oriented approaches. Delay commitment to the internal design of the module until the requirements of the clients on the interfaces stabilize. 

  • Use Interfaces. Separate interfaces from implementations. Clients should not depend on implementation decisions.

  • Use Parameters. Make magic numbers – constants that have meaning – into parameters. Make magic capabilities like databases and third party middleware into parameters. By passing capabilities into modules wrapped in simple interfaces, your dependence on specific implementations is eliminated and testing becomes much easier (see the sketch after this list).

  • Use Abstractions.  Abstraction and commitment are inverse processes. Defer commitment to specific representations as long as the abstraction will serve immediate design needs.

  • Avoid Sequential Programming.  Use declarative programming rather than procedural programming, trading off performance for flexibility. Define algorithms in a way that does not depend on a particular order of execution.

  • Beware of custom tool building.  Investment in frameworks and other tooling frequently requires committing too early to implementation details that end up adding needless complexity and seldom pay back. Frameworks should be extracted from a collection of successful implementations, not built on speculation.
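Here is the sketch promised above: a minimal Python illustration of the Use Interfaces and Use Parameters tactics. All of the names are hypothetical, and this is only one way to express the idea:

    from abc import ABC, abstractmethod

    class OrderStore(ABC):
        """The interface clients depend on -- not any particular database."""
        @abstractmethod
        def save(self, order_id, record): ...

    class InMemoryOrderStore(OrderStore):
        """One implementation; a database-backed store can be swapped in
        later without touching any client code."""
        def __init__(self):
            self._rows = {}
        def save(self, order_id, record):
            self._rows[order_id] = dict(record)

    class OrderService:
        # Both the storage capability and the tax rate are parameters,
        # so commitment to a specific database or magic number is deferred.
        def __init__(self, store, tax_rate):
            self._store = store
            self._tax_rate = tax_rate
        def place_order(self, order_id, subtotal):
            total = subtotal * (1 + self._tax_rate)
            self._store.save(order_id, {"total": total})
            return round(total, 2)

    service = OrderService(InMemoryOrderStore(), tax_rate=0.07)
    print(service.place_order("A-100", 100.0))  # 107.0

Testing becomes much easier as well: the same OrderService runs unchanged against the in-memory store in tests and against a real store in production.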

Additional tactics for delaying commitment include:
  • Avoid Repetition. This is variously known as the Don’t Repeat Yourself (DRY) or Once And Only Once (OAOO) principle. If every capability is expressed in only one place in the code, there will be only one place to change when that capability needs to evolve and there will be no inconsistencies.

  • Separate Concerns.  Each module should have a single well defined responsibility. This means that a class will have only one reason to change. 

  • Encapsulate Variation. What is likely to change should be hidden inside a module; the interfaces should be stable. Changes should not cascade to other modules. This strategy, of course, depends on a deep understanding of the domain to know which aspects will be stable and which variable. By applying appropriate patterns, it should be possible to extend the encapsulated behavior without modifying the code itself (a sketch follows at the end of this list).

  • Defer Implementation of Future Capabilities. Implement only the simplest code that will satisfy immediate needs rather than putting in capabilities you ‘know’ you will need in the future. You will know better in the future what you really need, and simple code will be easier to extend then if necessary.

  • Avoid extra features. If you defer adding features you ‘know’ you will need, then you certainly want to avoid adding extra features ‘just-in-case’ they are needed. Extra features add an extra burden of code to be tested and maintained, and understood by programmers and users alike. Extra features add complexity, not flexibility.

Much has been written on these delaying tactics, so they will not be covered in detail in this book.
  • Develop a sense of what is critically important in the domain.  Forgetting some critical feature of the system until too late is the fear that drives sequential development. If security, response time, or fail-safe operation is critically important in the domain, these issues need to be considered from the start; if they are ignored until too late, it will indeed be costly. However, the assumption that sequential development is the best way to discover these critical features is flawed. In practice, early commitments are more likely to overlook such critical elements than late commitments, because early commitments rapidly narrow the field of view.

  • Develop a sense of when decisions must be made.  You do not want to make decisions by default, or you have not delayed them. Certain architectural decisions, such as usability design, layering, and component packaging, are best made early, so as to facilitate emergence in the rest of the design. A bias toward late commitment must not degenerate into a bias toward no commitment. You need to develop a keen sense of timing and a mechanism to cause decisions to be made when their time has come.

  • Develop a quick response capability. The slower you respond, the earlier you have to make decisions. Dell, for instance, can assemble computers in less than a week, so they can decide what to make less than a week before shipping. Most other computer manufacturers take a lot longer to assemble computers, so they have to decide what to make much sooner. If you can change your software quickly, you can wait to make a change until customers know what they want.
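Returning to the Encapsulate Variation tactic above, here is a minimal Python sketch in the spirit of the Strategy pattern; the pricing domain and all names are invented for illustration:

    from abc import ABC, abstractmethod

    class PricingPolicy(ABC):
        """The stable interface; the pricing rule is the part that varies."""
        @abstractmethod
        def price(self, base): ...

    class RegularPricing(PricingPolicy):
        def price(self, base):
            return base

    class VolumeDiscount(PricingPolicy):
        def __init__(self, rate):
            self._rate = rate
        def price(self, base):
            return base * (1 - self._rate)

    def quote(base, policy):
        # Callers never change when a new policy is added; the variation
        # stays encapsulated behind PricingPolicy.
        return policy.price(base)

    print(quote(200.0, RegularPricing()))      # 200.0
    print(quote(200.0, VolumeDiscount(0.25)))  # 150.0

A new pricing rule is a new class, not a modification rippling through every caller, which is what keeps the commitment to any particular rule late and cheap.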

Cost Escalation
Software is different from most products in that software systems are expected to be upgraded on a regular basis. On the average, more than half of the development work that occurs on a software system occurs after it is first sold or placed into production. In addition to internal changes, software systems are subject to a changing environment – a new operating system, a change in the underlying database, a change in the client used by the GUI, a new application using the same database, etc. Most software is expected to change regularly over its lifetime, and in fact once upgrades are stopped, software is often nearing the end of its useful life. This presents us with a new category of waste, that is, waste caused by software that is difficult to change.

In 1987 Barry Boehm wrote, “Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases.” This observation became the rationale behind thorough up-front requirements analysis and design, even though Boehm himself encouraged incremental development over “single-shot, full product development.” In 2001, Boehm noted that for small systems the escalation factor can be more like 5:1 than 100:1, and that even on large systems, good architectural practices can significantly reduce the cost of change by confining features that are likely to change to small, well-encapsulated areas.

There used to be a similar, but more dramatic, cost escalation factor for product development. It was once estimated that a change after production began could cost 1000 times more than if the change had been made in the original design. The belief that the cost of change escalates as development proceeds contributed greatly to the standardization of the sequential development process in the US. No one seemed to recognize that the sequential process could actually be the cause of the high escalation ratio. However, as concurrent development replaced sequential development in the US in the 1990’s, the cost escalation discussion was forever altered. The discussion was no longer about how much a change might cost later in development; it centered on how to reduce the need for change through concurrent engineering.

Not all change is equal. There are a few basic architectural decisions that you need to get right at the beginning of development, because they fix the constraints of the system for its life. Examples of these may be choice of language, architectural layering decisions, or the choice to interact with an existing database also used by other applications. These kinds of decisions might have the 100:1 cost escalation ratio. Because these decisions are so crucial, you should focus on minimizing the number of these high stakes constraints. You also want to take a breadth-first approach to these high stakes decisions.

The bulk of the change in a system does not have to have a high cost escalation factor; it is the sequential approach that causes the cost of most changes to escalate exponentially as you move through development. Sequential development emphasizes getting all the decisions made as early as possible, so the cost of all changes is the same – very high. Concurrent design defers decisions as late as possible. This has four effects:
  • Reduces the number of high-stake constraints.

  • Gives a breadth-first approach to high-stakes decisions, making it more likely that they will be made correctly.

  • Defers the bulk of the decisions, significantly reducing the need for change.

  • Dramatically decreases the cost escalation factor for most changes.

A single cost escalation factor or curve is misleading. Instead of a chart showing a single trend for all changes, a more appropriate graph has at least two cost escalation curves, as shown in Figure 3-1. The agile development objective is to move as many changes as possible from the top curve to the bottom curve.

Figure 3-1. Two Cost Escalation Curves
Returning for a moment to the Toyota die cutting example, the die engineer sees the conceptual design of the car and knows roughly what size of door panel will be necessary. With that information, a big enough steel block can be ordered. If the concept of the car changes from a small, sporty car to a mid-size family car, the block of steel may be too small, and that would be a costly mistake. But the die engineer knows that once the overall concept is approved, it won’t change, so the steel can be safely ordered long before the details of the door emerge. Concurrent design is a robust design process because the die adapts to whatever design emerges.

Lean software development delays freezing all design decisions as long as possible, because it is easier to change a decision that hasn’t been made. Lean software development emphasizes developing a robust, change-tolerant design, one that accepts the inevitability of change and structures the system so that it can be readily adapted to the most likely kinds of changes.

The main reason why software changes throughout its lifecycle is that the business process in which it is used evolves over time. Some domains evolve faster than others, and some domains may be essentially stable. It is not possible to build in flexibility to accommodate arbitrary changes cheaply. The idea is to build tolerance for change into the system along domain dimensions that are likely to change. Observing where the changes occur during iterative development gives a good indication of where the system is likely to need flexibility in the future. If changes of certain types are frequent during development, you can expect that these types of changes will not end when the product is released. The secret is to know enough about the domain to maintain flexibility, yet avoid making things any more complex than they must be.

If a system is developed by allowing the design to emerge through iterations, the design will be robust, adapting more readily to the types of changes that occur during development. More importantly, the ability to adapt will be built into the system, so that as more changes occur after its release, they can be readily incorporated. On the other hand, if systems are built with a focus on getting everything right at the beginning in order to reduce the cost of later changes, their design is likely to be brittle and not accept changes readily. Worse, the chance of making a major mistake in the key structural decisions is increased with a depth-first, rather than a breadth-first, approach.

Friday, 25 April 2003

Lean Software Development

I often hear developers say: Life would be so much easier if customers would just stop changing their minds. In fact, there is a very easy way to keep customers from changing their minds – give them what they ask for so fast that they don’t have the time to change their minds.

When you order something on-line, it’s usually shipped before you have time for second thoughts. Because it ships so fast, you can wait until you’re sure of what you want before you order. But once you place the order, you have very little time to change your mind.

The idea behind lean thinking is exactly this: Let customers delay their decisions about exactly what they want as long as possible, and when they ask for something, give it to them so fast they don’t have time to change their minds.

Sure, you say, this is fine for Amazon.com, but how does it relate to software development? The way to deliver things rapidly is to deliver them in small packages. The bigger the increment of functionality you try to deliver, the longer it takes to decide what is needed and then get it developed, tested, and deployed. Maintenance programmers have known this for years. When a piece of production code breaks, they find the cause, create a patch, test it rigorously, and release it to production – usually in the space of a few hours or days.

But, you say, development is different; we need to develop a large system, we need a design, and we can’t deploy the system piecemeal. Even large systems that have to be deployed all-at-once should be developed in small increments. True, design is needed, but there is scant proof that great designs are achieved by meticulously gathering detailed requirements and analyzing them to death. Great designs come from great designers, and great designers understand that designs emerge as they develop a growing understanding of the problem.

As Harold Thimbleby said in Delaying Commitment [1], “In many disciplines, the difference between amateurs and experts seems to be that experts know how to delay their commitments…. Amateurs, on the other hand, try to get things completely right the first time and often fail… in their anxious desire to avoid error, they make early commitments – often the wrong ones… In fact, the expert’s strategy of postponing firm decisions, discovering constraints, and then filling in the details is a standard heuristic to solve problems.”

Lean Thinking
Over the last two decades, Lean Thinking has created fundamental transformations in the way we organize manufacturing, logistics, and product development. For example, concurrent development has replaced sequential development in industries from airlines to automobiles, cutting product development time by perhaps a third and development cost in half, while improving the quality and timeliness of the product. There are those who think that rapid development is equivalent to shoddy work, but lean organizations have demonstrated quite the opposite. The measure of the maturity of an organization is the speed with which it can reliably and repeatedly respond to a customer request.

Yes, you heard that right. Maturity is not measured by the comprehensiveness of process documentation or the ability to make detailed plans and follow them. It is measured by operational excellence, and the key indicator of operational excellence is the speed with which the organization can reliably and repeatedly serve its customers. Thirty years ago, Frederick W. Smith, the founder of FedEx, envisioned an overnight delivery system that seemed ridiculous to most people; today even the post office offers overnight delivery. A decade ago, Toyota could develop a new car twice as fast as GM; today all automotive companies measure and brag about the speed with which they can bring a new car to market.

Let’s revisit those customers you wish would make up their minds and stick to their decisions. Ask yourself this question: Once they do make up their minds, how fast can you reliably and repeatedly deliver what they want? Perhaps the problem does not lie in customers who can’t make up their minds and keep changing their decisions. Perhaps the problem lies in asking them to decide well before they have the necessary information to make a decision, and in taking so long to deliver that their circumstances change.

Principles of Lean Software Development
There are seven principles of Lean Software Development, drawn from the seven principles of Lean Thinking. These principles are not cookbook recipes for software development, but guideposts for devising appropriate practices for your environment.[2] These lean principles have led to dramatic improvements in areas as diverse as military logistics, health care delivery, building construction, and product development. When wisely translated to your environment, they can change your basis of competition.

Eliminate Waste
All lean thinking starts with a re-examination of what waste is and an aggressive campaign to eliminate it. Quite simply, anything you do that does not add value from the customer perspective is waste. The seven wastes of software development are: 
  • Partially Done Work (the “inventory” of a development process)
  • Extra Processes (easy to find in documentation-centric development)
  • Extra Features (develop only what customers want right now)
  • Task Switching (everyone should do one thing at a time)
  • Waiting (for instructions, for information)
  • Handoffs (tons of tacit knowledge gets lost)
  • Defects (at least defects that are not quickly caught by a test)
All lean approaches focus on eliminating waste by looking at the flow of value from request to delivery. So if a customer wants something, what steps does that customer request have to go through to get delivered to the customer? How fast does that process flow? If a customer request waits in a queue for approval, a queue for design, a queue for development, a queue for testing, and a queue for deployment, work does not flow very fast. The idea is to create cells (or teams) of people chartered to take each request from cradle to grave, rapidly and without interruption. Then value flows.
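As a toy illustration of why those queues dominate (all numbers here are invented), compare the time a request spends waiting with the time anyone actually works on it:

    # Days a hypothetical request spends waiting in each queue...
    queue_waits = {"approval": 10, "design": 15, "development": 5,
                   "testing": 12, "deployment": 8}
    touch_time = 6  # ...versus days of actual value-adding work

    lead_time = sum(queue_waits.values()) + touch_time
    print(f"lead time: {lead_time} days, "
          f"flow efficiency: {touch_time / lead_time:.0%}")
    # lead time: 56 days, flow efficiency: 11%

A cradle-to-grave team attacks the 50 days of waiting, not the 6 days of work.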

Amplify Learning
The game of development is a learning game: hypothesize what might work, experiment to see if it works, learn from the results, do it again. People who design experiments know that the greatest learning occurs when half of the experiments fail, because this exposes the boundary conditions. A development environment is not a place for slogans such as:
  • Plan The Work And Work The Plan
  • Do It Right The First Time
  • Eliminate Variability
The learning that comes from short feedback loops is critical to any process with inherent variation. The idea is not to eliminate variation; it is to adapt to variation through feedback. Your car’s cruise control adapts to hilly terrain by frequently measuring the difference between desired speed and actual speed and adjusting the accelerator. Similarly, software development uses frequent iterations to measure the difference between what the software can do and what the customer wants, and makes adjustments accordingly.
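The cruise-control analogy can be sketched as a toy feedback loop; the point is that frequent small corrections, not an up-front plan, keep the actual close to the desired:

    def adjust(throttle, actual, desired, gain=0.005):
        # Proportional feedback: a small correction based on the measured gap.
        return throttle + gain * (desired - actual)

    throttle, speed = 0.55, 55.0
    for cycle in range(8):          # each pass is one short feedback cycle
        throttle = adjust(throttle, speed, desired=65.0)
        speed = 100.0 * throttle    # crude model: speed follows throttle
        print(cycle, round(speed, 2))  # converges toward 65

Each iteration of a software project plays the same role as one pass through this loop: measure the gap, make a small correction, measure again.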

A lean development environment focuses on increasing feedback, and thus learning. The primary way to do this in software development is with short, full-cycle iterations. Short means a week to a month. Full cycle means the iteration results in working software: tested, integrated, deployable code. There should be a bias to deploy each iteration into production, but when that is not possible, end users should simulate use of the software in a production-equivalent environment.

We have known for a long time that iterative (evolutionary) development is the best approach for software development. In 1987 the report of the Defense Science Board Task Force on Military Software noted: “Document-driven, specify-then-build approach lies at the heart of so many DoD software problems…. Evolutionary development is best technically, and it saves time and money.”[3]


Delay Commitment
Delaying commitment means keeping your options open as long as possible. The fundamental lean concept is to delay irreversible decisions until they can be made based on known events, rather than forecasts. Economic markets develop options as a way to deal with uncertainty. Farmers, for example, can buy an option on the future price of grain. If the price drops, they have protected their profits. If the price rises, they can ignore the option and sell at the higher price. Options let people delay decisions until they have more information.

There are a lot of ways to keep options open in software development. Here are a few:
  • Share partially complete design information.
  • Organize for direct, worker-to-worker collaboration.
  • Develop a sense of when decisions must be made.
  • Develop a sense of how to absorb changes.
  • Avoid Repetition
  • Separate Concerns
  • Encapsulate Variation
  • Defer Implementation of Future Capabilities
  • Commit to Refactoring
  • Use Automated Test Suites
Deliver Fast
To those who equate rapid software development with hacking, there seems to be no reason to deliver results fast, and every reason to be slow and careful. Similarly, when Just-in-Time concepts surfaced in Japan in the early 1980’s, most western manufacturers could not fathom why they made sense. Everybody knew that the way to make customers happy was to have plenty of product on the shelf, and the way to maximize profits was to build massive machines and keep them busy around the clock. It took a long time for people to realize that this conventional wisdom was wrong.

The goal is to let your customers take an options-based approach to making decisions, letting them delay their decisions as long as possible so they can make decisions based on the best possible information. Once your customers decide what they want, your goal should be to create that value just as fast as possible. This means no delay in deciding which requests to approve, no delay in staffing, immediate clarification of requirements, no time-consuming handoffs, no delay in testing, no delay for integration, no delay in deployment. In a mature software development organization, all of this happens in one smooth, rapid flow in response to a customer need.

Empower the Team
In a lean organization, things move fast, so decisions about what to do have to be made by the people doing the work. Flow in a lean organization is based on local signaling and commitment amongst team members, not on management directives. The work team designs its own processes, makes its own commitments, gathers the information needed to reach its goals, and polices itself to meet its milestones.

Wait a minute, you say. Why would I want to be empowered? If I make the decisions, then I’ll get blamed when things go wrong. A team is not empowered unless it has the training, expertise, and leadership necessary to do the job at hand. But once those elements are in place, a working team is far better equipped to make decisions than those who are not involved in the day-to-day activities of developing software. It is true that decision-making carries greater responsibility, but it also brings significantly greater influence on the course of events and the success of the development effort.

Consider an emergency response unit such as firefighters. Members receive extensive training, both in the classroom and on the job. Exercises and experience imprint patterns of how to respond to difficult situations. When an emergency occurs, the team responds to the situation as it unfolds; there is little time to ask remote commanders what to do, nor would it make sense. The important thing is that the responding teams have the training, organization and support to assess the situation as they encounter it and make basically correct decisions on their own.

Similarly in software, the development team is in the best position to know how to respond to difficult problems and urgent requests. The best way to be sure that you get things right is to work directly with customers to understand their needs, collaborate with colleagues to figure out how to meet those needs, and frequently present the results to customers to be sure you are on the right track. Management’s job is to supply the organization, training, expertise, leadership, and information so that you generally make the right decisions, make rapid course corrections as you learn, and end up with a successful outcome.

Build Integrity In
There are two kinds of integrity – perceived integrity and conceptual integrity. Software with perceived integrity delights the customer – it’s exactly what they want even though they didn’t know how to ask for it. Google comes to mind – when I use Google I imagine that the designers must have gotten inside my head when they added the spelling checker. I would never have known to ask for this feature, but these days I type most URLs into the Google toolbar because if I mistype, Google will set me straight.

The way to achieve perceived integrity is to have continuous, detailed information flow from the users, or user proxies, to the developers. This is often done by an architect or master designer who understands the user domain in detail and makes sure that the developers always have the real user needs in front of them as they make day-to-day design decisions. Note that the technical leader facilitates information flow and domain understanding; she or he is intimately involved in the day-to-day work of the developers, keeping the customer needs always in mind. However, it is the developers, not the leader, who make the detailed, day-to-day decisions and tradeoffs that shape the system.

Conceptual integrity means that all of the parts of a software system work together to achieve a smooth, well functioning whole. Software with conceptual integrity presents the user with a single metaphor of how a task is to be done. You don’t have one way of buying airline tickets if you are paying with cash and a totally different way if you are using frequent flier miles. You don’t use kilometers in one module and miles in another.
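One common way to preserve such a single metaphor at the code level is to commit to one internal representation and convert only at the edges. Here is a minimal Python sketch with hypothetical names:

    class Distance:
        """One internal representation (meters), so no module can disagree."""
        def __init__(self, meters):
            self._m = float(meters)
        @classmethod
        def from_miles(cls, miles):
            return cls(miles * 1609.344)
        def kilometers(self):
            return self._m / 1000.0
        def miles(self):
            return self._m / 1609.344

    d = Distance.from_miles(3.0)
    print(round(d.kilometers(), 3))  # 4.828 -- every module sees the same value

Whether a given module displays miles or kilometers becomes a presentation detail; the system itself has only one notion of distance.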

Conceptual integrity is achieved through continuous, detailed information flow between various technical people working on a system. There are no two ways about it, people have to talk to each other, early and often. There can be no throwing things over the wall, no lines between supplier, development team, support team, customer. Everyone should be involved in detailed discussions of the design as it develops, from the earliest days of the program.

For example, Boeing credits its rapid and successful development of the 777 to its ‘Working Together’ program, where customers, designers, suppliers, manufacturers, and support teams all met from the start to design the plane. Early on it was discovered that the fuel tank was beyond the reach of all existing refueling trucks, a mistake that was easily fixed. In a normal development process this expensive mistake would not have been discovered until someone tried to fuel the first plane.

There are those who believe that software integrity comes from a documentation-centric approach to development: define the requirements in detail and trace every bit of code back to those requirements. In fact, such an approach tends to interfere with the technical communication that is essential to integrity. Instead of a documentation-centric approach, use a test-centric approach to integrity. Test early, test often, test exhaustively, and make sure an automated test suite is delivered as part of the product.

See the Whole
When you look at them closely, most theories of how to manage software projects are based on a theory of disaggregation: break the whole into individual parts and optimize each one. Lean thinking suggests that optimizing individual parts almost always leads to a sub-optimized overall system.

Optimizing the use of testing resources, for example, decreases the ability of the overall system to rapidly produce tested, working code. Measuring an individual’s ability to produce code without defects ignores the well-known fact that about 80% of defects are caused by the way the system works, and hence are management problems.

The best way to avoid sub-optimization and encourage collaboration is to make people accountable for what they can influence, not just what they can control. This means measuring performance one level higher than you would expect. Measure the team’s defect count, not that of individuals. Make testers as accountable for defect-free code as developers. To some it seems unfair to measure individuals on team performance, but lean organizations have found that individuals are rarely able to change the system which influences their performance. However, a team, working together and responsible for its own processes, can and will make dramatic improvements.

Conclusion
To keep customers from changing their minds, raise the maturity of your organization to the level where it can reliably deliver what customers want so fast that they have no time to change their minds. Focus on value, flow, and people, and the rest will take care of itself.
_________________
References
[1] Harold Thimbleby, "Delaying Commitment," IEEE Software, May 1988.

[2] The book Lean Software Development: An Agile Toolkit by Mary Poppendieck and Tom Poppendieck, Addison-Wesley Professional, 2003, provides twenty-two tools for converting lean principles into agile software development practices.

[3] Craig Larman, "A History of Iterative and Incremental Development," IEEE Computer, June 2003.


Monday, 6 January 2003

Measure Up

Getting measurements right can be devilishly difficult, but getting them wrong can be downright dangerous. If you look underneath most self-defeating behavior in organizations, you will often find a well-intentioned measurement that has gone wrong. Consider the rather innocent-sounding measurement of productivity, and its close cousin, utilization. One of the biggest impediments to adopting Just-in-Time manufacturing was the time-honored practice of trying to extract maximum productivity out of every machine. The inevitable result was that mounds of inventory collected to feed machines and additional piles of inventory stacked up at the output side of the machines. The long queues of material slowed everything down, as queues always do. Quality problems often took days to surface, and customer orders often took weeks to fill. Eventually manufacturing people learned that running machines for maximum productivity was a sub-optimizing practice, but it was a difficult lesson.

As software development organizations search for productivity in today’s tight economy, we see the same lesson being learned again. Consider the testing department which is expected to run at 100% utilization. Mounds of code tend to accumulate at the input side of the testing department, and piles of completed tests stack up at the output side. Many defects lurk in the mountain of code, and more are being created by developers who do not have immediate feedback on their work. When a testing department is expected to run at full utilization, the likely result will be an increased defect level, resulting in more work for the testing department.

Nucor Steel grew from a startup in 1968 into a $4 billion giant, attributing much of its success to an incentive pay system based on productivity. Productivity? How did Nucor keep their productivity measurement robust and honest throughout all of that growth? How did they avoid the sub-optimization so common in most productivity measurements?

The secret is that Nucor measures productivity at a team level, not at an individual level. For example, a plant manager is not rewarded on the productivity of his or her plant, but on the productivity of all plants. The genius of Nucor’s productivity measurement is that it avoids sub-optimization by measuring results at one level higher than one would expect, thus encouraging knowledge sharing and system-wide optimization.

How can this be fair? How can plant managers be rewarded based on productivity of plants over which they have no control? The problem is, if we measure people solely on results over which they have full control, they have little incentive to collaborate beyond their own sphere of influence to optimize the overall business. While local measurements may seem fair to individuals, they are hardly fair to the organization as a whole.

Measure-UP, the practice of measuring results at the team rather than the individual level, keeps measurements honest and robust. The simple act of raising a measurement one level up from the level over which an individual has control changes its dynamic from a personal performance measurement to a system effectiveness indicator.

In the book Measuring and Managing Performance in Organizations (Dorset House, 1996), Robert Austin discusses the dangers of performance measurements. The beauty of performance measurements is that “You get what you measure.” The problem with performance measurements is that “You get only what you measure, nothing else.” You tend to lose the things that you can’t measure: insight, collaboration, creativity, dedication to customer satisfaction.

Austin recommends aggregating individual performance measurements into higher level informational measures that hide individual results in favor of group results. As radical as this may sound, it is not unfamiliar. W. Edwards Deming, the noted quality expert, insisted that most quality defects are not caused by individuals, but by management systems that make error-free performance all but impossible. Attributing defects to individuals does little to address the systemic causes of defects, and placing blame on individuals when the problem is systemic perpetuates the problem.

Software defect measurements are frequently attributed to individual developers, but the development environment often conspires against individual developers and makes it impossible to write defect-free code. Instead of charting errors by developer, a systematic effort to provide developers with immediate testing feedback, along with a root cause analysis of remaining defects, is much more effective at reducing the overall software defect rate.

By aggregating defect counts into an informational measurement, and hiding individual performance measurements, it becomes easier to address the root causes of defects. If an entire development team, testers and developers alike, feel responsible for the defect count, then testers will tend to become involved earlier and provide more timely and useful feedback to developers. Defects caused by code integration will become everyone’s problem, not just the unlucky person who wrote the last bit of code.

It flies in the face of conventional wisdom to suggest that the most effective way to avoid the pitfalls of measurements is to use measurements that are outside the personal control of the individual being measured. But conventional wisdom is misleading. Instead of making sure that people are measured within their span of control, it is more effective to measure people one level above their span of control. This is the best way to encourage teamwork, collaboration, and global, rather than local, optimization.


Lessons from Planned Economies

Just as a market economy which relies on the collective actions of intelligent agents gives superior performance in a complex and changing economic environment, so too an agile project leadership approach which leverages the collective intelligence of a development team will give superior performance in a complex and changing business environment. However, conventional project management training focuses on using a plan as the program for action; it does not teach project leaders how to create a software development environment that fosters self-organization and learning. Since very few courses with such a focus are available today, this paper proposes a curriculum for agile software development leaders.

Planned Economies
In the middle of the 20th century, dozens of countries and millions of people believed that central planning was the best way to run their economies. Even today there are many people who can’t quite understand why market economies invariably outperform planned economies; it would seem that at least some of the planned economies should have flourished. After all, there are advantages to centralizing economic decisions: virtually full employment is possible; income can be distributed more equally; central coordination should be more efficient; directing resources into investment should spur growth. So why did planned economies fail?

There are two fundamental problems with planned economies: First, in a complex and changing economic system, it is impossible to plan for everything, so a lot of things fall between the cracks. For instance, planned economies usually suffer a shortage of spare parts, because no one plans for machines to break down. Secondary effects such as environmental impact are often ignored. Furthermore, planners do not have control of the purchase of goods, so they have to guess what consumers really want. Inaccurate forecasts are amplified by a long planning cycle, causing chronic shortages and surpluses.

The second problem with planned economies is diminished incentives for individuals. When compensation does not depend on contribution, there is little to gain from working hard. When incentives are tied to meeting targets, risk-averse managers focus on routine production to meet goals. The stronger that rewards are tied to meeting targets, the more disincentive there is for being creative or catching the things that fall between the cracks.

If we look at conventional software project management, we see practices similar to those used in planned economies, and we also see similar results. Among projects over $3 million, less than 10% meet the conventional definition of success: on time, on budget, on scope. For projects over $6 million the number drops below 1% [1]. Interestingly, the underlying causes of failure of planned economies are the same things that cause failure in software projects, and further, the remedy is similar in both cases.

The difference between a planned and a market economy is rooted in two different management philosophies: management-as-planning/adhering and management-as-organizing/learning. Management-as-planning/adhering focuses on creating a plan that becomes a blueprint for action, then managing implementation by measuring adherence to the plan. Management-as-organizing/learning focuses on organizing work so that intelligent agents know what to do by looking at the work itself, and improve upon their implementation through a learning process.

The Planning/Adhering Model Of Project Management
Conventional wisdom holds that managing software projects is equivalent to meeting pre-planned cost, schedule and scope targets. The unquestioned dominance of cost, schedule and scope – often to the exclusion of less tangible factors such as usability or realization of purpose – draws heavily on the contract administration roots of project management. Therefore project management training and certification programs tend to focus on the management-as-planning/adherence philosophy. This philosophy has become entrenched because it seems to address two fears: a fear of scope-creep, and a fear that the cost of changes escalates significantly as development progresses.

However, management-as-planning/adhering leads to the same problems in software projects that planned economies suffered: in a complex and changing environment, it is virtually impossible for the plan to cover everything, and measuring adherence to the plan diminishes incentives for creativity and for catching the things that fall between the cracks.

In the classic article ‘Management by Whose Objectives?’,[4] Harry Levinson suggests that the biggest problem with management-by-objectives is that important intangibles, the things that cannot be measured, fail to get addressed because they appear in no manager’s action plan. Often these are secondary ‘hygiene’ factors, similar to environmental considerations in a planned economy.

In the book ‘Measuring and Managing Performance in Organizations’,[5] Robert Austin makes the same point: over time, people will optimize whatever is measured and rewarded. Anything that is not part of the measurement plan will fade from importance. Austin points out that managers are often uncomfortable with the idea of not being able to measure everything, so they compensate through one of three techniques:
  • Standardization. Creating standards for each step of the development process, in the hope that every step can be planned and measured and nothing will be missed.

  • Specification. Constructing a detailed model of the product and/or process, then planning every step in detail.

  • Subdivision (functional decomposition). Decomposing a project into steps so that every step can be planned; the Work Breakdown Structure (WBS) is the classic example.

Conventional project management practices have emphasized all of these techniques to help a project manager be certain that everything is covered in the project plan. However, just as in a planned economy, these techniques are insufficient to catch everything in all but the simplest of projects. In fact, drilling down to detail early in the project has the opposite effect – it tends to create blind spots, not resolve them. A depth-first, rather than breadth-first, approach to planning makes mistakes and omissions more likely,[6] and these tend to be more costly because of the early investment in detail. Thus an early drill-down approach tends to amplify, not reduce, the cost of change.

A management-as-planning/adhering approach also tends to amplify, not reduce, scope-creep. In many software development projects, a majority of the features are seldom or never used.[1] Part of the reason for this is that asking clients at the beginning of a project what features they want, and then preventing them from changing their minds later, creates a strong incentive to request extra features, just in case they turn out to be needed. While limiting scope usually provides the best opportunity for reducing software development costs, fixing scope early and controlling it rigidly tends to expand, not reduce, scope.

Just as in planned economies, management-as-planning/adhering tends to have unintended consequences that produce precisely the undesirable results that the plans were supposed to prevent. The problem lies not in the planning, which is very useful, but in using the plan as a roadmap for action and measuring performance against the plan.

The Organizing/Learning Model Of Project Management
Market economies deal with the problems of planned economies by depending upon collaborating intelligent agents to make decisions within an economic framework. In market economies, it is the job of the government to organize the economic framework with such things as anti-trust laws and social safety nets. Economic activity is conducted by intelligent agents who learn from experience what is needed and how to fill the needs.

Of course, the economies of individual countries dwarf most software projects, so we should look to smaller domains for examples of management-as-organizing/learning. We will explore two: manufacturing and product development.

Throughout most of the 20th century, mass production in the US focused on getting things done through central planning and control, reflecting the strong influence of Frederick Taylor’s Scientific Management. The climax came when computer systems made it possible to plan the exact movement of materials and work throughout a plant. Material Requirements Planning (MRP) systems were widely expected to increase manufacturing efficiency in the 1980’s, but in fact, most MRP systems were a failure at detailed production planning. They failed for the same reasons that planned economies failed: the systems could not readily adapt to slight variations in demand or productivity. Thus they created unworkable schedules, which had to be ignored, causing the systems to become ever more unrealistic.

As the centralized MRP planning systems were failing, Just-in-Time systems appeared as a counterpoint to Scientific Management. Just-in-Time forsakes central planning in favor of collaborating teams (intelligent agents). The environment is organized in such a way that the work itself and the neighboring teams, rather than a central plan, signal what needs to be done. When problems occur, the root cause is sought out and eliminated, creating an environment in which intelligent agents continually improve the overall system. In almost all manufacturing environments, implementing Just-in-Time trumps any attempt to plan detailed production activities with an MRP system. These systems succeed for the same reason a market economy succeeds: intelligent agents are better at filling in the gaps and adapting to variation than a centrally planned system.

An argument can be made that manufacturing analogies are not appropriate for software development, because manufacturing is repetitive, while projects deal with unique situations. Because of this uniqueness, the argument goes, management-as-planning/adhering is the only way to maintain control of a design and development environment. A look at product development practices shows that the opposite is true: creating a detailed plan and measuring adherence to that plan is a rather ineffective approach to a complex product development project.

In the late 1980’s, Detroit was shocked to discover that a typical Japanese automotive company could develop a new car in two-thirds of the time and at half the cost of a typical US automaker.[7] The difference was that product development in Japan used a concurrent development process, which allows for learning cycles during the design process as well as ongoing communication and negotiation among intelligent agents as the design proceeds.

Just as market economies routinely outperform planned economies, concurrent development routinely outperforms sequential development. Replacing sequential (plan-up-front) engineering with concurrent (plan-as-you-go) engineering has been credited with reducing product development time by 30-70%, engineering changes by 65-90%, and time to market by 20-90%, while improving quality by 200-600%, and productivity by 20-110%.[8]

Based on experience from other domains, management-as-organizing/learning would appear to have a better chance of producing successful software projects than the prevailing management-as-planning/adhering approach. An examination of the Agile Manifesto shows that agile software development approaches favor the management-as-organizing/learning philosophy. (See Table 1.) Therefore, we can expect that in the long run, agile software development might significantly outperform traditional software development practices. In fact, evidence is mounting that agile approaches can be very effective.[9]

Table 1. Mapping Values from the Agile Manifesto to Management Philosophies

  Management-as-organizing/learning    Management-as-planning/adhering
  Individuals and interactions         Processes and tools
  Working software                     Comprehensive documentation
  Customer collaboration               Contract negotiation
  Responding to change                 Following a plan

A Curriculum For Agile Software Project Leadership
Existing training for project management appears to be largely focused on the management-as-planning/adhering philosophy. Courses seem to be aimed at obtaining certification in an approach to project management developed for other domains, such as facilities construction or military procurement. Even courses aimed specifically at software development tend to focus on work breakdown and a front-end-loaded approach to managing scope. As we have seen, this is not a good match for concurrent development or agile software development.

Given the dismal track record of current approaches and the potential of agile software development, a curriculum should be available for leaders of agile software development projects. Project managers who know how to develop work breakdown structures and measure earned value often wonder what to do with an agile software development project. Senior managers often wonder how to improve the skills of project leaders. Courses on management-as-organizing/learning are needed to fill this void, but there seem to be few project management courses with this focus. To help make agile project leadership training more widely available, this article outlines a possible curriculum for such courses.

Change the Name: Project Leadership
Since this is new territory, we may as well start with a new name and move away from the administrative connotations of the word management. All projects, agile or otherwise, benefit from good leaders; that is, people who set direction, align people, and create a motivating environment.[10] By using the term leadership we distinguish this course from one that focuses on the management tasks of planning, budgeting, staffing, tracking, and controlling.

Setting Direction
Planning is a good thing; the ultimate success of any project depends upon the people who implement it understanding what constitutes success. Planning becomes brittle when it decomposes the problem too fast and moves too quickly to solutions. The best approach to early planning is to move up one notch and take a broader view, rather than decompose the problem and commit to detail too early.[6] A project leader starts by understanding the purpose of the project and keeping that purpose in front of the development team at all times.

Organizing Through Iterations
The idea of management-as-organizing/learning is to structure the work so that developers can figure out what to do from the work itself. The fundamental tool for doing this is short iterations that deliver working software with real business value. Project leaders need to know how to organize iteration planning meetings and how to structure the development environment and workday so that people know what to do when they come in to work every day, without being told.

This part of the curriculum must cover such concepts as story cards, backlog lists, daily meetings, pair programming, and information radiators. It should also stress the importance of organizing worker-to-worker collaboration between those who understand what the system must do to provide value and those who understand what the code does.
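
To make the mechanics concrete, here is a minimal sketch, in Python, of a prioritized backlog and an iteration-planning step. The names and the value/estimate fields are hypothetical illustrations, not part of any particular agile method; many teams use nothing more elaborate than index cards on a wall.

    from dataclasses import dataclass

    @dataclass
    class Story:
        name: str
        business_value: int   # the customer's relative ranking of the story
        estimate: float       # the team's own estimate, in story points

    def plan_iteration(backlog, capacity):
        """Fill one iteration with the highest-value stories that fit
        the team's measured capacity (its velocity)."""
        planned, remaining = [], capacity
        for story in sorted(backlog, key=lambda s: -s.business_value):
            if story.estimate <= remaining:
                planned.append(story)
                remaining -= story.estimate
        return planned

The point is not the code but the structure it mirrors: the backlog is visible to everyone, ordered by value, and the team, not a central plan, decides what fits in the iteration.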

Concurrent Development
Strategies for concurrent development are an important tool for project leaders, especially in organizations which are used to sequential development. General strategies include:
  • sharing partially complete design information

  • communicating design constraints instead of proposed solutions

  • maintaining multiple options

  • avoiding extra features

  • developing a sense of how to absorb changes

  • developing a sense of what is critically important in the domain

  • developing a sense of when decisions must be made

  • delaying decisions until they must be made

  • developing a quick response capability

System Integrity
Project leaders must ensure that the software developed under their guidance has integrity. This starts with making sure the basic tools of good software development are in place: version control, a build process, automated testing, naming conventions, software standards, and so on. Leaders must ensure that the development team is guided by the true voice of the customer, so that the resulting system delivers value both initially and over time. They must see that technical leadership establishes the basis of a sound architecture and an effective interface. And they must make sure that comprehensive tests provide immediate feedback, as well as a framework within which refactoring can safely take place.
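
As one illustration of the automated-test safety net described above, here is a minimal sketch using Python's standard unittest module. The business rule and all names are hypothetical; the point is that tests like these run on every build and give immediate feedback when a refactoring breaks something.

    import unittest

    def apply_discount(price, percent):
        """Hypothetical business rule: apply a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100.0), 2)

    class DiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.0, 15), 85.0)

        def test_rejects_invalid_percent(self):
            self.assertRaises(ValueError, apply_discount, 100.0, 150)

    if __name__ == "__main__":
        unittest.main()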

Leading Teams
People with experience in traditional project management who are about to lead agile software development projects might need some coaching in how to encourage a team to make its own commitments, estimate its own work, and self-organize around iteration goals. Project leaders attend the daily meetings, listen to and help solve team problems, serve as an intermediary between management and the team, secure necessary resources and technical expertise, resolve conflicts, and keep everyone working together effectively; but they do not tell developers how to do their jobs. Project leaders coordinate the end-of-iteration demonstration and the next iteration’s planning, making sure that work is properly prioritized and all stakeholder interests are served.

Measurements
Feature lists or release plans, along with associated effort estimates, are often developed early in an agile project. The difference from traditional project management occurs when actual measurements begin to vary from these plans: in agile development, it is assumed that the plan is in error and needs to be revised. Measuring the actual velocity or burndown rate gives a far better picture of project health than measuring variance against a guesstimate. Estimates become more accurate as developers gain experience in the domain and customers see working software. Leaders should learn how to combine reports of actual progress with increasingly accurate estimates into a tool for negotiating the scope of a project; this can be far more effective at limiting scope than the traditional method of fixing scope up front and controlling it with change approval mechanisms.
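
A worked example may help. The numbers below are hypothetical, but the arithmetic is the whole technique: measure what was actually finished, then project forward from that rather than from the original guesstimate.

    # Story points actually completed in the last three iterations
    completed = [18, 22, 20]
    remaining_scope = 120    # story points still in the backlog

    velocity = sum(completed) / len(completed)    # measured rate: 20.0
    iterations_left = remaining_scope / velocity  # projection: 6.0

    print("Velocity: %.1f points per iteration" % velocity)
    print("Projected iterations remaining: %.1f" % iterations_left)

If the projection says six iterations and the release plan allows four, the conversation becomes a scope negotiation grounded in measured fact, not an argument about variance from an estimate.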

A useful technique for project leaders is to aggregate all measurements one level higher than normal.[5] This encourages collaboration and knowledge sharing because issues are called to the attention of a larger group of people. It also helps to avoid local optimization and the dangers of individual performance measurements. This technique is useful, for instance, for defect measurements.
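
For instance, defect counts might be aggregated per team rather than per developer; a minimal sketch, with hypothetical data:

    from collections import Counter

    # The log may record individuals, but the report deliberately rolls
    # up one level, so the measurement never becomes a personal score.
    defects = [
        {"developer": "alice", "team": "checkout"},
        {"developer": "bob",   "team": "checkout"},
        {"developer": "carol", "team": "billing"},
    ]

    by_team = Counter(d["team"] for d in defects)
    print(by_team)    # Counter({'checkout': 2, 'billing': 1})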

Large Projects
Various techniques for synchronizing agile development across multiple teams are important for project leaders to understand. Techniques that might be covered include divisible architectures, the daily build and smoke test, spanning applications, and loosely coupled teams that develop interfaces before modules.
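
As a sketch of the last of these techniques, two loosely coupled teams might agree on an interface like the one below before either builds its module. The service and its methods are hypothetical.

    from abc import ABC, abstractmethod

    class InventoryService(ABC):
        """The agreed contract, published before either module is written."""

        @abstractmethod
        def quantity_on_hand(self, sku: str) -> int: ...

        @abstractmethod
        def reserve(self, sku: str, quantity: int) -> bool: ...

    class StubInventory(InventoryService):
        """One team codes against this stub while another team builds the
        real module; the daily build and smoke test exercise both sides."""

        def quantity_on_hand(self, sku):
            return 100    # canned answer, good enough for a smoke test

        def reserve(self, sku, quantity):
            return quantity <= 100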

Conclusion
We have often heard that the sequential, or waterfall, approach to software development projects is widely known to be ineffective, yet it seems quite difficult to do things differently. We have also heard that many features in most systems, possibly even a majority, are never needed or used; yet limiting scope to what is necessary seems to be an intractable problem. One solution to these problems can be found in concurrent engineering, which is widely used in product development as an alternative to sequential development. Concurrent engineering works much like a market economy: both depend on collaborating intelligent agents, both allow the end result to emerge through communication, and both provide incentives for individuals to be creative and do everything necessary to achieve success.
_______________
References:
[1] Johnson, Jim, Chairman of The Standish Group, keynote “ROI, It’s Your Job,” Third International Conference on Extreme Programming, Alghero, Italy, May 26-29, 2002.

[2] Johnston, R. B., and Brennan, M., “Planning or Organizing: the Implications of Theories of Activity for Management of Operations,” Omega: The International Journal of Management Science, Vol 24, no 4, pp. 367-384, Elsevier Science, 1996.

[3] Koskela, Lauri, “On New Footnotes to Shingo,” 9th International Group for Lean Construction Conference, Singapore, August 6-8, 2001.

[4] Levinson, Harry, “Management by Whose Objectives?,” Harvard Business Review, Vol 81, no 1, January 2003; reprint of 1970 article.

[5] Austin, Robert D., Measuring and Managing Performance in Organizations, Dorset House Publishing, 1996.

[6] Thimbleby, Harold, “Delaying Commitment,” IEEE Software, Vol 5, no 3, May 1988.

[7] Clark, Kim B., and Fujimoto, Takahiro, Product Development Performance: Strategy, Organization, and Management in the World Auto Industry, Harvard Business School Press, Boston, 1991.

[8] Thomas Group Inc.; National Institute of Standards & Technology; Institute for Defense Analyses; reported in Business Week, April 30, 1990, p. 111.

[9] Weber Morales, Alexandra, “Extreme Quality,” Software Development, Vol 11, no 2, February 2003.

[10] Kotter, John P., “What Leaders Really Do,” Harvard Business Review, Vol 79, no 11, December 2001; reprint of 1990 article.
