The New Venture team had done an incredible job, and they knew it. Increment by increment they had built a new software product, and when the deadline came, everything that had to be operational was working flawlessly. The division vice president thanked everyone who had contributed to the effort at an afternoon celebration, and the team members congratulated each other as they relived some of the more harrowing moments of the last six months.
The next day the team’s Scrum Master was catching up on long-ignored e-mail when Dave, the development manager, called. “Say Sue,” he said, “Great job your team did! I’ve been waiting for the product launch before I bothered you with this, but the appraisal deadline is next week. I need your evaluation of each team member. And if you could, I’d like you to rank the team, you know, who contributed the most down to who contributed the least.”
Sue almost heard the air escaping as her world deflated. “I can’t do that,” she said. “Everyone pitched in 100%. We could not have done it otherwise.” “But Sue,” Dave said. “Certainly there must have been an MVP. And a runner-up. And so on.” “No, not really,” Sue replied. “But what I can do is evaluate everyone’s contribution to the effort.”
Sue filled out an appraisal input form for each team member. She rated everyone’s performance, but found that she had to check the ‘far exceeded expectations’ box for each team member. After all, getting out the product on time was a spectacular feat, one that far exceeded everyone’s expectations.
Two days later Sue got a call from Janice in human resources. “Sue,” she said, “Great job your team did! And thanks for filling out all of those appraisal input forms. But really, you can’t give everyone a top rating. Your average rating should be ‘meets expectations’. You can only have one or two people who ‘far exceeded expectations’. Oh and by the way, since you didn’t rank the team members, would you please plan on coming to our ranking meeting next week? We are going to need your input on that. After all, at this company we pay for performance, and we need to evaluate everyone carefully so that our fairness cannot be questioned.”
Sue felt like a flat tire. In the past, whenever she had a particularly difficult problem, she had consulted the team, and she decided to consult them once again. At 10:00 the next morning, the entire team listened as Sue explained her problem. They had always come up with creative solutions to the problems she had presented before, and she could only hope they would be able to do it again. She thought she might convince them to elect an MVP or two, to help her put some variation into the evaluations.
Sue suspected that Dave and Janice might not approve of her approach, but she didn’t realize that when the team members heard her dilemma, they would deflate just as quickly as she had. The best they could do was insist that everyone had given 200% effort, that they had all helped each other, and that every single person had done a truly outstanding job. They were not interested in electing an MVP, but they were willing to choose an LVP: it would be the unnamed manager who was asking Sue to decide amongst them.
Now Sue really had a problem. She had no idea how to respond to Dave and Janice, and the New Venture team had turned angry and suspicious. Tomorrow they would have to start working together on the next release. How could something that was supposed to boost performance do such a thorough job of crushing the team’s spirit?
Deming’s View
Sue is not the only one who has had trouble with merit pay evaluation and ranking systems. One of the greatest thought leaders of the 20th century, W. Edwards Deming, wrote that immeasurable damage is created by ranking people, merit systems, and incentive pay. Deming believed that every business is a system and the performance of individuals is largely the result of the way the system operates. In his view, the system causes 80% of the problems in a business, and the system is management’s responsibility. He wrote that using exhortations and incentives to get individuals to solve management problems simply doesn’t work. Deming opposed ranking because it destroys pride in workmanship, and merit raises because they address the symptoms, rather than the causes, of problems.
It’s a bit difficult to take Deming at face value on this; after all, companies have been using merit pay systems for decades, and their use is increasing. Moreover, Deming was mainly involved in manufacturing, so possibly his thinking does not apply directly to knowledge work like software development. Still, someone as wise as Deming is not to be ignored, so let’s take a deeper look into employee evaluation and reward systems, and explore what causes them to become dysfunctional.
Dysfunction #1: Competition
As the New Venture team instinctively realized, evaluation systems which rank people for purposes of merit raises pit individual employees against each other and strongly discourage collaboration. Even when the rankings are not made public, the fact that they happen does not remain a secret. Sometimes ranking systems are used as a basis for dismissing the lowest performers, making them even more threatening. When team members are in competition with each other for their livelihood, teamwork quickly evaporates.
Competition between teams, rather than individuals, may seem like a good idea, but it can be equally damaging. Once I worked in a division in which there were two separate teams developing software products that were targeting similar markets. The members of the team which attracted the largest market share were likely to have more secure jobs and enhanced career opportunities. So each team expanded the capability of its product to attract a broader market. The teams ended up competing fiercely with each other for the same customer base as well as for division resources. In the end, both products failed. A single product would have had a much better chance at success.
Dysfunction #2: The Perception of Unfairness
There is no greater de-motivator than a reward system which is perceived to be unfair. It doesn’t matter whether the system actually is fair or not; if there is a perception of unfairness, then those who think that they have been treated unfairly will rapidly lose their motivation.
People perceive unfairness when they miss out on rewards they think they should have shared. What if the vice president had given Sue a big reward? Even if Sue had acknowledged the hard work of the team, they would probably have felt that she was profiting at their expense. You can be sure that Sue would have had a difficult time generating enthusiasm for work on the next release, even if the evaluation issues had not surfaced.
Here’s another scenario: What would have happened if the New Venture team was asked out to dinner with the VP and each member given a good sized bonus? The next day the operations people who worked late nights and weekends to help get the product out on time would have found out and felt cheated. The developers who took over maintenance tasks so their colleagues could work full time on the product would have also felt slighted. Other teams might have felt that they could have been equally successful, except that they got assigned to the wrong product.
Dysfunction #3: The Perception of Impossibility
The New Venture Team met their deadline by following the Scrum practice of releasing a high quality product containing only the highest priority functionality. But let’s try a different scenario: Let’s assume that the team was given a non-negotiable list of features that had to be done by a non-negotiable deadline, and let’s further speculate that the team was 100% positive that the deadline was impossible. (Remember this is hypothetical; surely this would never happen in real life.) Finally, let’s pretend that the team was promised a big bonus if they met the deadline.
There are two things that could happen in this scenario. Financial incentives are powerful motivators, so there is a chance that the team might have found a way to do the impossible. However, the more likely case is that the promise of a bonus that was impossible to achieve would make the team cynical, and the team would be even less motivated to meet the deadline than before the incentive was offered. When people find management exhorting them to do what is clearly impossible rather than helping to make the task possible, they are likely to be insulted by the offer of a reward and give up without half trying.
Dysfunction #4: Sub-Optimization
I recently heard of a business owner who offered testers $5 for every defect they could find in a product about to go into beta release. She thought this would encourage the testers to work harder, but the result was quite different. The good working relationship between developers and testers deteriorated as testers lost their incentive to help developers quickly find and fix defects before they propagated into multiple problems. After all, the more problems the testers found, the more money they made.
When we optimize a part of a chain, we invariably sub-optimize overall performance. One of the most obvious examples of sub-optimization is the separation of software development from support and maintenance. If developers are rewarded for meeting a schedule even if they deliver brittle code without automated test suites or an installation process, then support and maintenance of the system will cost far more than was saved during development.
Dysfunction #5: Destroying Intrinsic Motivation
There are two approaches to giving children allowances. Theory A says that children should earn their allowances, so money is exchanged for work. Theory B says that children should contribute to the household without being paid, so allowances are not considered exchange for work. I know one father who was raised with Theory B, but switched to Theory A for his children. He put a price on each job and paid the children weekly for the jobs they had done. This worked for a while, but then the kids discovered that they could choose amongst the jobs and avoid doing the ones they disliked. When the children were old enough to earn a paycheck, they stopped doing household chores altogether, and the father found himself mowing the lawn alongside the teenage children of his neighbors. Were he to do it again, this father says he would not tie allowance to work.
In the same way, once employees get used to receiving financial rewards for meeting goals, they begin to work for the rewards, not the intrinsic motivation that comes from doing a good job and helping their company be successful. Many studies have shown that extrinsic rewards like grades and pay will, over time, destroy the intrinsic reward that comes from the work itself.
One Week Later
Sue was nervous as she entered the room for the ranking meeting. She had talked over her problem with Wayne, her boss, and although he didn’t have any easy solutions, he suggested that she present her problem to the management team. Shortly after the meeting started, Janice asked Sue how she would rank her team members. Sue took a deep breath, got a smile of encouragement from Wayne, and explained how the whole idea of ranking made no sense for a team effort. She explained how she had asked for advice from the team and ended up with an angry and suspicious team.
“You should never have talked to the team about this,” said Janice. “Hold on a minute,” Wayne jumped in. “I thought our goal in this company is to be fair. How can we keep our evaluation policies secret and expect people to consider them fair? It doesn’t matter if we think they are fair, it matters if employees think they are fair. If we think we can keep what we are doing a secret, we’re kidding ourselves. We need to be transparent about how we operate; we can’t make decisions behind closed doors and then try to tell people ‘don’t worry, we’re being fair.’”
Sue was amazed at how fast the nature of the discussion changed after Wayne jumped to her defense. Apparently she wasn’t the only one who thought this ranking business was a bad idea. Everyone agreed that the New Venture team had done an excellent job, and the new product was key to their business. No one had thought that it could be done, and indeed the team as a whole had far exceeded everyone’s expectations. It became apparent that there wasn’t a person in the room who was willing to sort out who had contributed more or less to the effort, so Sue’s top evaluation for every team member was accepted. More importantly, the group was concerned that a de-motivated New Venture team was a serious problem. Eventually the vice president agreed to go to the next meeting of the New Venture team and discuss the company’s evaluation policies. Sue was sure that this would go a long way to revitalize the team spirit.
Now the management team had a problem of its own. They knew that they had to live within a merit pay system, but they suspected they needed to rethink the way it was implemented. Since changes like that don’t happen overnight, they formed a committee to look into various evaluation and pay systems.
The committee started by agreeing that evaluation systems should not be used to surprise employees with unexpected feedback about their performance. Performance feedback loops must be far shorter than an annual, or even a quarterly, evaluation cycle. Appraisals are good times to review and update development plans for an employee, but if this is the only time an employee finds out how they are doing, a lot more needs fixing than the appraisal system.
With this disclaimer in mind, the committee developed some guidelines for dealing with various forms of differential pay systems.
Guideline #1: Make Sure The Promotion System Is Unassailable
In most organizations, significant salary gains come from promotions which move people to a higher salary grade, not merit increases. Where promotions are not available, as is the case for many teachers, merit pay systems have a tendency to become contentious, because merit increases are the only way to make more money. When promotions are available, employees tend to ignore the merit pay system and focus on the promotion system. Of course this system of promotions tends to encourage people to move into management as they run out of promotional opportunities in technical areas. Companies address this problem with ‘dual ladders’ that offer management-level pay scales to technical gurus.
The foundation of any promotion system is a series of job grades, each with a salary range in line with industry standards and regional averages. People must be placed correctly in a grade so that their skills and responsibilities match the job requirements of their level. Initial placements and promotion decisions should be carefully made and reviewed by a management team.
Usually job grades are embedded in titles, and promotions make the new job grade public through a new title. Thus a person’s job grade is generally considered public information. If employees are fairly placed in their job grade, and promoted only when they are clearly performing at the new job grade, then salary differences based on job grade are generally perceived to be fair. Thus a team can have both senior and junior people, generalists and highly skilled specialists, all making different amounts of money. As long as the system of determining job grades and promotions is transparent and perceived to be fair, this kind of differential pay is rarely a problem.
The management team at Sue’s company decided to focus on a promotion process that did not use either a ranking or a quota system. Instead, clear promotion criteria would be established for each level, and when someone had met the criteria, they would be eligible for promotion. A management committee would review each promotion proposal and gain a consensus that the promotion criteria were met. This would be similar to existing committees that reviewed promotions to fill open supervisor or management positions.
Guideline #2: De-emphasize The Merit Pay System
When the primary tool for significant salary increases is promotion, then it’s important to focus as much attention as possible on making sure the promotion system is fair. When it comes to the evaluation system that drives merit pay, it’s best not to try too hard to sort people out. Studies show that when information sharing and coordination are necessary, organizations that reduce pay differences between the highest and the lowest paid employees tend to perform better over time.
Use evaluations mainly to keep everyone at an appropriate level in their salary grade. Evaluations might flag those who are ready for promotion and those who need attention, but that should trigger a separate promotion or corrective action process. About four evaluation grades are sufficient, and a competent supervisor with good evaluation criteria and input from appropriate sources can make fair evaluations that accomplish these purposes.
Even when annual raises are loosely coupled to merit, evaluations will always be a big deal for employees, so attention should be paid to making them fair and balanced. Over the last decade, balanced scorecards have become popular for management evaluations; at least in theory, balanced scorecards ensure that the multiple aspects of a manager’s job all receive attention. A simple version of a balanced scorecard might also be used for merit pay evaluations, to emphasize the fact that people must perform well on many dimensions to be effective. A supervisor might develop a scorecard with each employee that takes into account team results, new competencies, leadership, and so on. It is important that employees perceive that the input to a scorecard is valid and fairly covers the multiple aspects of their job. It is important to keep things simple, because too much complexity will unduly inflate the attention paid to a pay system which works better when it is understated. Finally, scorecards should not be used to feed a ranking system.
Guideline #3: Tie Profit Sharing To Economic Drivers
Nucor Steel decided to get into the steel business in 1968, and thirty years later it was the biggest steel company in the US. When Nucor started up, Bethlehem Steel considered it a mere gnat, but 35 years later Bethlehem Steel was not only bankrupt, but sold off for assets. So Nucor Steel is one very successful company that has done a lot of things right in a tough industry. Quite surprisingly, Nucor has a decades-old tradition of paying for performance. How does the company avoid the dysfunctions of rewards?
Nucor Steel identified profit per ton of finished steel as its key economic driver, and based its profit sharing plan on the contribution a team makes to improving this number. So, for example, a team that successfully develops a new steel-making process or starts up a new plant on schedule will not see an increase in pay until the process or plant has improved the company’s profit per ton of steel. Thus Nucor avoids sub-optimization by tying its differential pay system as closely as possible to the economic driver of its business.
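As a rough sketch of what tying pay to an economic driver might look like, here is a toy calculation. The numbers, the 10% share rate, and the bonus formula are all invented for illustration; the source describes the principle, not a specific formula.

```python
# Hypothetical sketch: profit sharing paid only when a team's work
# improves the key economic driver (profit per ton of steel).
# All figures and the share rate are invented for illustration.

def profit_per_ton(total_profit: float, tons_shipped: float) -> float:
    """The key economic driver: profit per ton of finished steel."""
    return total_profit / tons_shipped

def team_bonus(baseline_ppt: float, current_ppt: float,
               tons_shipped: float, share_rate: float = 0.10) -> float:
    """Pay a share of the *improvement* in profit per ton.

    No improvement over the baseline means no bonus, which is how
    pay stays tied to the economic driver rather than to activity.
    """
    improvement = current_ppt - baseline_ppt
    if improvement <= 0:
        return 0.0
    return improvement * tons_shipped * share_rate

# A new process that lifts profit per ton from $50 to $56 on
# 100,000 tons yields a bonus pool of $60,000 at a 10% share.
print(team_bonus(50.0, 56.0, 100_000))  # 60000.0
```

The point of the shape of the formula is that a plant startup or process change that does not move profit per ton produces no payout at all, no matter how much effort went into it.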
Guideline # 4: Reward Based on Span of Influence, Not Span of Control
Conventional wisdom says that people should be evaluated based on results that are under their control. However, this kind of evaluation creates competition rather than collaboration. Nucor makes sure that its profit sharing formula rewards relatively large teams, not just the individuals or small groups who have direct responsibility for an area. Following this principle, if a software program creates a significant profit increase, everyone from those who brought the idea into the company to developers and testers to operations and support people to the end users should share in any reward.
Nucor Steel works hard to create a learning environment, where experts move from one plant to another, machine operators play a significant role in selecting and deploying new technology, and tacit knowledge spreads rapidly throughout the company. Its reward system encourages knowledge sharing by rewarding people for influencing the success of areas they do not control.
How, exactly, can rewards be based on span of influence rather than span of control? I recommend a technique called ‘Measure UP’. No matter how hard you try to evaluate knowledge work or how good a scorecard you create, something will go unmeasured. Over time, the unmeasured area will be de-emphasized and problems will arise. We have a tendency to add more measurements to the scorecard to draw attention to the neglected areas.
However, it is a lot easier to catch everything that falls between the cracks by reducing the number of measurements and raising them to a higher level. For instance, instead of measuring software development with cost and schedule and earned value, try creating a P&L or ROI for the project, and help the team use these tools to drive tradeoff decisions.
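To make the 'Measure UP' idea concrete, here is a minimal sketch of replacing separate cost, schedule, and earned-value measures with a single project-level ROI that a team can use to drive tradeoff decisions. All figures are hypothetical.

```python
# A minimal sketch of "measuring up": roll cost and benefit into one
# project-level ROI instead of tracking many low-level metrics.
# All figures are hypothetical.

def project_roi(expected_benefit: float, development_cost: float,
                operating_cost: float) -> float:
    """Return on investment over the measurement period."""
    total_cost = development_cost + operating_cost
    return (expected_benefit - total_cost) / total_cost

# A tradeoff decision: is an extra feature worth its added cost?
# Compare the ROI of shipping now against adding the feature.
roi_ship_now = project_roi(expected_benefit=900_000,
                           development_cost=400_000,
                           operating_cost=100_000)
roi_with_feature = project_roi(expected_benefit=1_000_000,
                               development_cost=500_000,
                               operating_cost=100_000)
print(round(roi_ship_now, 2), round(roi_with_feature, 2))  # 0.8 0.67
```

With a single higher-level measure like this, scope decisions that would otherwise fall between the cracks of separate cost and schedule metrics show up directly in the number the whole team shares.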
Guideline #5: Find Better Motivators Than Money
While monetary rewards can be a powerful driver of behavior, the motivation they provide is not sustainable. Once people have an adequate income, motivation comes from things such as achievement, growth, control over one’s work, recognition, advancement, and a friendly working environment. No matter how good your evaluation and reward system may be, don’t expect it to do much to drive stellar performance over the long term.
In the book “Hidden Value”, Charles O’Reilly and Jeffrey Pfeffer present several case studies of companies that obtain superb performance from ordinary people. These companies have people-centered values which are aligned with actions at all levels. They invest in people, share information broadly, rely on teams, and emphasize leadership rather than management. Finally, they do not use money as a primary motivator; they emphasize the intrinsic rewards of fun, growth, teamwork, challenge and accomplishment.
Treat monetary rewards like explosives, because they will have a powerful impact whether you intend it or not. So use them lightly and with caution. They can get you in trouble much faster than they can solve your problems. Once you go down the path of monetary rewards, you may never be able to go back, even when they cease to be effective, as they inevitably will. Make sure that people are fairly and adequately compensated, and then move on to more effective ways to improve performance.
Six Months Later
The New Venture Team is having another celebration. They had all been surprised when the VP came to their team meeting six months earlier. But they quickly recovered and told her that they each wanted to be the best, they wanted to work with the best, and they did not appreciate the implication that some of them were better than others. When the VP left, the team cheered Sue for sticking up for them, and then got down to work with renewed enthusiasm. Now, two releases later, customers are showing their appreciation with their pocketbooks.
There haven’t been any dramatic pay increases and only the occasional, well-deserved promotion. But the company has expanded its training budget and New Venture team members have found themselves mentoring other teams. Sue is rather proud of them all as she fills out the newly revised appraisal input forms that have more team-friendly evaluation criteria. This time Sue is confident that her judgment will not be questioned.
Tuesday, 10 August 2004
Thursday, 24 June 2004
An Introduction to Lean Software Development
Gustaf Brandberg, CEO and co-founder of Citerus, a software consulting firm in Uppsala, Sweden, conducted this interview, which was featured in PNEHM! #2 2004, a Swedish-language newsletter on software development produced and distributed by Citerus.
Gustaf Brandberg: What is Lean Software Development?
Mary Poppendieck: Lean Software Development is the application of Lean Thinking to the software development process. Organizations that are truly lean have a strong competitive advantage because they respond very rapidly and in a highly disciplined manner to market demand, rather than try to predict the future. Similarly, Lean Software Development is the discipline of creating software that readily adapts to changes in its domain. There is a pendulum that swings from one extreme to the other – first there was ad-hoc software development, which could not scale to large complex systems. Then the pendulum swung far to the other side and heavy process discipline was favored, but this proved unresponsive to change in a world where change is constant. Lean Software Development provides a middle ground: high discipline along with high responsiveness to change.
Gustaf: When did you first start working according to Lean principles? From where did you get the inspiration?
Mary: I was working in a manufacturing plant making video tapes in the mid 1980’s, and our Japanese competitors were selling video tapes at half of what it cost us to make them. We needed to understand how they could do this, and we discovered Just-in-Time production. We did two things: first, we provided every single worker in the plant with training in how a Just-in-Time flow works and had the workers design the details of a pull system. Then we stopped scheduling each workstation and instead sent a weekly schedule to the packing station. It worked like magic; inventory disappeared, quality improved, costs dropped, and customer response time was a week instead of a month.
Gustaf: You believe that there is no such thing as a 'best' practice. Why is that?
Mary: Frederick Winslow Taylor wrote The Principles of Scientific Management in 1911. In it, he proposed that manufacturing should be broken down into very small steps, and then industrial engineers should determine the ‘one best way’ to do each step. This ushered in the era of mass production, with ‘experts’ telling workers the ‘one best way’ to do their jobs.
The Toyota Production System is founded on the principles of the Scientific Method, instead of Scientific Management. The idea is that no matter how good a process is, it can always be improved, and that the workers doing the job are the best people to figure out how to do it better. In Lean Production, workers learn how to create a hypothesis, test it, analyze the results, and – if the data supports the hypothesis – make the change permanent.
Software development covers a lot of territory, and no matter how good a practice may be, it will not apply universally across all software development environments. Moreover, even where a practice does apply, it can and should always be improved upon. There are certainly underlying principles that do not change. These principles will develop into different practices in different domains, driven by the economic reality of each environment.
Gustaf: What are these core principles in Lean Software Development you are referring to?
Mary: Something like 45% of the features in a typical software system are never used, and another 19% are rarely used. That means roughly two thirds of the software in a typical system is waste. Eliminating unused features is the first place to look when reducing software waste. We need to focus on creating more value with less effort, and make sure that the resulting systems do not turn into legacy software.
Gustaf: If what you are saying is true, all that needless functionality is certainly a waste. How come so many features are never or rarely used?
Mary: Quite often at the beginning of a software development project, we ask customers what they want, even though they don’t really know. We make it clear to customers that they need to tell us about everything they might possibly want, and quite often we record their wish list without question. This is what we call ‘Scope’. Later, if customers want to add or change items in the ‘Scope’ we challenge them with a ‘Change Review Process’. So we reward customers for coming up with a long initial list of features, and punish them if they want to modify the list at a later stage. Is it any wonder that two thirds of the features in a system developed playing this game are rarely or never used?
Gustaf: So, I realize putting in extra features is a big source of waste. In your book, you write that another way of discovering waste is mapping your value stream. What is a value stream?
Mary: In the physical product world, a value stream is the flow of a product from raw material to final use. For instance, a Cola can starts out as bauxite, is reduced to alumina, smelted into aluminum ingots, rolled into sheets, rolled again into thin strips, stamped into disks, formed into cans, cleaned and painted, filled with Cola and sealed, packaged and put on a pallet, warehoused at a distributor, sent to a retail store, put on a shelf, purchased by a consumer, stored in a refrigerator, and finally the Cola is consumed. At every step, a huge pile of inventory builds up waiting for the next step, which is days or weeks away. This value stream takes 319 days, of which only 3 hours (less than .04%) are spent actually making the product.
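As a quick sanity check of that arithmetic, 3 hours of actual processing out of a 319-day value stream works out as follows:

```python
# Checking the cola-can arithmetic: 3 hours of actual value-adding
# work in a value stream that takes 319 days end to end.
processing_hours = 3
total_hours = 319 * 24           # 7656 hours end to end
fraction = processing_hours / total_hours
print(f"{fraction:.4%}")         # 0.0392% -- indeed less than .04%
```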
In the software world, a value stream starts when a business unit decides that better information would help it increase revenue or reduce cost. This is the start of the value stream. The value stream ends when the deployed software starts generating the extra revenue or reducing costs. Inventory in the software development value stream is partially done work: requirements that are not analyzed and designed, designs that are not coded, code that is not tested and integrated, features that are not deployed, and deployed features that are not saving money or reducing costs. When the software value stream has as little of this partially done work as possible, risks are reduced and productivity is greatly improved.
Gustaf: Another principle is delaying commitment. Isn't the project schedule in danger of slipping when no one dares make a decision?
Mary: A military officer who was about to retire once said: ‘The most important thing I did in my career was to teach young leaders that whenever they saw a threat, their first job was to determine the timebox for their response. Their second job was to hold off making a decision until the end of the timebox, so that they could make it based on the best possible data.’
Our natural tendency is to make decisions and get them over with. However, it is far better to determine the timebox for every decision, and then make the decision at the end of the timebox, because then we can make decisions based on the best possible data. In Lean Software Development, decisions are not avoided; they are scheduled and made at the last responsible moment. This assures that all decisions are made in a timely manner, yet they are made with as much information as possible to help make the best decision possible.
Gustaf: What do you mean by ‘integrity’? Why is it so important to maintain it?
Mary: Integrity is a level above quality; it is the thing that makes people want to use a product. For instance, there are many high quality search engines, but only one Google. There is something about Google that attracts people to use it every day. That is integrity. Software is never an end in itself, it is always a means to an end. It gets used when it provides the best means to the end. To maintain integrity, software needs regular updates to make sure that it continues to provide the best way for users to achieve their goals, even as the goals and the technology change. Otherwise the software becomes irrelevant at best, or legacy at worst.
Gustaf: Why haven't all organizations been successful in applying Lean Thinking? What are the most frequent pitfalls?
Mary: At its core, Lean Thinking means creating a learning environment for workers in order to increase the flow of value. All too often, practices from successful Lean companies are adopted, but the heart of Lean Thinking is lost. If people, learning and value are not the central focus of a Lean initiative, it will not be particularly successful.
Last week I explained to a president of a small software company how to implement responsibility-based planning and control: schedule releases and iterations, make sure that the development team agrees at the start on what features will be included, and then leave it to the team to meet their commitment. This made the president of the company uneasy. He was already using iterations, but he wasn’t quite ready to give up the responsibility for planning and tracking tasks to the development team.
Think of an emergency response team, firefighters or paramedics for example. They are trained in emergency scenarios that establish patterns for responding to anything they are likely to encounter. When an emergency occurs, there is no time for decisions to go up the chain of command and back down; emergency responders are expected to use their own judgment to deal with an emergency as they confront it. Similarly in a Lean organization, everyone is trained and equipped to do their job, and the organization is structured so that it is clear what needs to be accomplished. But it is the workers who make decisions about what to do, track their own work, and assume the responsibility for meeting their goals.
Gustaf: This sounds a lot like Scrum to me. In Scrum, the team is responsible for managing itself. How are Agile methodologies such as Scrum and Extreme Programming related to Lean Software Development?
Mary: Both Scrum and Extreme Programming (XP) are examples of Lean Thinking applied to developing software. Scrum is an excellent approach to responsibility-based planning and control. XP is a tremendous set of disciplines that enable rapid, repeatable, reliable delivery of code. I particularly like XP’s focus on testing, continuous integration and refactoring. I think of refactoring as constantly improving the code base – it’s sort of like applying kaizen (the Japanese word for continuous improvement) to a software system.
Gustaf: Where can I learn more about Lean Software Development?
Mary: A good start would be my book: Lean Software Development: An Agile Toolkit. Other recommended reading is Lean Thinking: Banish Waste and Create Wealth in Your Corporation, 2nd edition (by Jones & Womack), Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency (DeMarco) and Product Development for the Lean Enterprise (Kennedy). Also,check my own website and articles on Lean Development at Agile Alliance.
Screen Beans Art, © A Bit Better Corporation
Gustaf Brandberg: What is Lean Software Development?
Mary Poppendieck: Lean Software Development is the application of Lean Thinking to the software development process. Organizations that are truly lean have a strong competitive advantage because they respond very rapidly and in a highly disciplined manner to market demand, rather than try to predict the future. Similarly, Lean Software Development is the discipline of creating software that readily adapts to changes in its domain. There is a pendulum that swings from one extreme to the other – first there was ad-hoc software development, which could not scale to large complex systems. Then the pendulum swung far to the other side and heavy process discipline was favored, but this proved unresponsive to change in a world where change is constant. Lean Software Development provides a middle ground: high discipline along with high responsiveness to change.
Gustaf: When did you first start working according to Lean principles? From where did you get the inspiration?
Mary: I was working in a manufacturing plant making video tapes in the mid 1980’s, and our Japanese competitors were selling video tapes at half of what it cost us to make them. We needed to understand how they could do this, and we discovered Just-in-Time production. We did two things: first, we provided every single worker in the plant with training in how a Just-in-Time flow works and had the workers design the details of a pull system. Then we stopped scheduling each workstation and instead sent a weekly schedule to the packing station. It worked like magic; inventory disappeared, quality improved, costs dropped, and customer response time was a week instead of a month.
Gustaf: You believe that there is no such thing as a 'best' practice. Why is that?
Mary: Frederick Winslow Taylor wrote The Principles of Scientific Management in 1911. In it, he proposed that manufacturing should be broken down into very small steps, and then industrial engineers should determine the ‘one best way’ to do each step. This ushered in the era of mass production, with ‘experts’ telling workers the ‘one best way’ to do their jobs.
The Toyota Production System is founded on the principles of the Scientific Method, instead of Scientific Management. The idea is that no matter how good a process is, it can always be improved, and that the workers doing the job are the best people to figure out how to do it better. In Lean Production, workers learn how to create a hypothesis, test it, analyze the results, and – if the data supports the hypothesis – make the change permanent.
Software development covers a lot of territory, and no matter how good a practice may be, it will not apply universally across all software development environments. Moreover, even where a practice does apply, it can and should always be improved upon. There are certainly underlying principles that do not change. These principles will develop into different practices in different domains, driven by the economic reality of each environment.
Gustaf: What are these core principles in Lean Software Development you are referring to?
Mary:
- Eliminate Waste – Do only what adds value for a customer, and do it without delay.
- Amplify Learning – Use frequent iterations and regular releases to provide feedback.
- Delay Commitment – Make decisions at the last responsible moment.
- Deliver Fast – The measure of the maturity of an organization is the speed at which it can repeatedly and reliably respond to customer need.
- Empower the Team – Assemble an expert workforce, provide technical leadership and delegate the responsibility to the workers.
- Build Integrity In – Have the disciplines in place to assure that a system will delight customers both upon initial delivery and over the long term.
- See the Whole – Use measurements and incentives focused on achieving the overall goal.
Mary: Something like 45% of the features in a typical software system are never used, and another 19% are rarely used. That means roughly two thirds of the software in a typical system is waste. Eliminating this unused functionality is the first place to look when reducing waste in software. We need to focus on creating more value with less effort, and make sure that the resulting systems do not turn into legacy software.
Gustaf: If what you are saying is true, all that needless functionality is certainly a waste. How come so many features are never or rarely used?
Mary: Quite often at the beginning of a software development project, we ask customers what they want, even though they don’t really know. We make it clear to customers that they need to tell us about everything they might possibly want, and quite often we record their wish list without question. This is what we call ‘Scope’. Later, if customers want to add or change items in the ‘Scope’ we challenge them with a ‘Change Review Process’. So we reward customers for coming up with a long initial list of features, and punish them if they want to modify the list at a later stage. Is it any wonder that two thirds of the features in a system developed playing this game are rarely or never used?
Gustaf: So, I realize putting in extra features is a big source of waste. In your book, you write that another way of discovering waste is mapping your value stream. What is a value stream?
Mary: In the physical product world, a value stream is the flow of a product from raw material to final use. For instance, a Cola can starts out as bauxite, is reduced to alumina, smelted into aluminum ingots, rolled into sheets, rolled again into thin strips, stamped into disks, formed into cans, cleaned and painted, filled with Cola and sealed, packaged and put on a pallet, warehoused at a distributor, sent to a retail store, put on a shelf, purchased by a consumer, stored in a refrigerator, and finally the Cola is consumed. At every step, a huge pile of inventory builds up waiting for the next step, which is days or weeks away. This value stream takes 319 days, of which only 3 hours (less than .04%) are spent actually making the product.
In the software world, a value stream starts when a business unit decides that better information would help it increase revenue or reduce cost, and it ends when the deployed software starts generating that extra revenue or delivering those cost reductions. Inventory in the software development value stream is partially done work: requirements that are not analyzed and designed, designs that are not coded, code that is not tested and integrated, features that are not deployed, and deployed features that are not yet generating revenue or reducing costs. When the software value stream has as little of this partially done work as possible, risks are reduced and productivity is greatly improved.
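The Cola-can arithmetic above generalizes into a simple metric, sometimes called process cycle efficiency: value-adding time divided by total lead time. Here is a minimal sketch in Python; the stage names and durations below are invented for illustration, not taken from the interview.

```python
# Process cycle efficiency: value-adding time / total lead time.
# All stage names and hours are illustrative placeholders for a
# hypothetical software value stream.

def cycle_efficiency(stages):
    """stages: list of (name, hours, value_adding) tuples."""
    total = sum(hours for _, hours, _ in stages)
    value_adding = sum(hours for _, hours, va in stages if va)
    return value_adding / total

stages = [
    ("waiting in backlog",     400, False),
    ("analysis and design",     40, True),
    ("waiting for developers", 200, False),
    ("coding",                  80, True),
    ("waiting for test",       160, False),
    ("test and integration",    40, True),
    ("waiting for deployment", 120, False),
]

print(f"cycle efficiency: {cycle_efficiency(stages):.1%}")  # 15.4%
```

For these numbers, only about 15% of the elapsed time adds value; the point of mapping the value stream is that the waiting stages dominate the total, and they are the first candidates for elimination.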
Gustaf: Another principle is delaying commitment. Isn't the project schedule in danger of slipping when no one dares make a decision?
Mary: A military officer who was about to retire once said: ‘The most important thing I did in my career was to teach young leaders that whenever they saw a threat, their first job was to determine the timebox for their response. Their second job was to hold off making a decision until the end of the timebox, so that they could make it based on the best possible data.’
Our natural tendency is to make decisions and get them over with. However, it is far better to determine the timebox for every decision, and then make the decision at the end of the timebox, because then we can base it on the best possible data. In Lean Software Development, decisions are not avoided; they are scheduled and made at the last responsible moment. This assures that all decisions are made in a timely manner, yet with as much information as possible.
Gustaf: What do you mean by ‘integrity’? Why is it so important to maintain it?
Mary: Integrity is a level above quality; it is the thing that makes people want to use a product. For instance, there are many high quality search engines, but only one Google. There is something about Google that attracts people to use it every day. That is integrity. Software is never an end in itself; it is always a means to an end. It gets used when it provides the best means to that end. To maintain integrity, software needs regular updates to make sure that it continues to provide the best way for users to achieve their goals, even as the goals and the technology change. Otherwise the software becomes irrelevant at best, or legacy at worst.
Gustaf: Why haven't all organizations been successful in applying Lean Thinking? What are the most frequent pitfalls?
Mary: At its core, Lean Thinking means creating a learning environment for workers in order to increase the flow of value. All too often, practices from successful Lean companies are adopted, but the heart of Lean Thinking is lost. If people, learning and value are not the central focus of a Lean initiative, it will not be particularly successful.
Last week I explained to the president of a small software company how to implement responsibility-based planning and control: schedule releases and iterations, make sure that the development team agrees at the start on what features will be included, and then leave it to the team to meet its commitment. This made the president uneasy. He was already using iterations, but he wasn’t quite ready to hand over the responsibility for planning and tracking tasks to the development team.
Think of an emergency response team, firefighters or paramedics for example. They are trained in emergency scenarios that establish patterns for responding to anything they are likely to encounter. When an emergency occurs, there is no time for decisions to go up the chain of command and back down; emergency responders are expected to use their own judgment to deal with an emergency as they confront it. Similarly in a Lean organization, everyone is trained and equipped to do their job, and the organization is structured so that it is clear what needs to be accomplished. But it is the workers who make decisions about what to do, track their own work, and assume the responsibility for meeting their goals.
Gustaf: This sounds a lot like Scrum to me. In Scrum, the team is responsible for managing itself. How are Agile methodologies such as Scrum and Extreme Programming related to Lean Software Development?
Mary: Both Scrum and Extreme Programming (XP) are examples of Lean Thinking applied to developing software. Scrum is an excellent approach to responsibility-based planning and control. XP is a tremendous set of disciplines that enable rapid, repeatable, reliable delivery of code. I particularly like XP’s focus on testing, continuous integration and refactoring. I think of refactoring as constantly improving the code base – it’s sort of like applying kaizen (the Japanese word for continuous improvement) to a software system.
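The kaizen-style refactoring Mary describes can be illustrated with a small, hypothetical Python fragment (nothing here comes from the interview): behavior stays the same, while duplication is removed and intent is made explicit.

```python
# Before: the discount rule is buried inside a loop, so every new
# category or rate change means editing this function.
def total_price_before(items):
    total = 0
    for item in items:
        if item["category"] == "book":
            total += item["price"] * 0.9   # books get 10% off
        else:
            total += item["price"]
    return total

# After one small refactoring step: the discount rule is named and
# isolated, so the next change touches exactly one place.
DISCOUNTS = {"book": 0.10}

def discounted_price(item):
    return item["price"] * (1 - DISCOUNTS.get(item["category"], 0))

def total_price(items):
    return sum(discounted_price(item) for item in items)
```

Each such step is small and preserves behavior, which is exactly why it is safe when backed by the automated tests XP insists on; repeated continuously, these steps are what keep a code base from decaying.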
Gustaf: Where can I learn more about Lean Software Development?
Mary: A good start would be my book, Lean Software Development: An Agile Toolkit. Other recommended reading includes Lean Thinking: Banish Waste and Create Wealth in Your Corporation, 2nd edition (Womack & Jones), Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency (DeMarco), and Product Development for the Lean Enterprise (Kennedy). Also, check my website and my articles on Lean Development at the Agile Alliance.
Screen Beans Art, © A Bit Better Corporation
Saturday, 13 March 2004
Why the Lean in Lean Six Sigma?
A free-wheeling mid-sized company ran smack up against the Sarbanes-Oxley Act a few months ago and found that it had to pause and take stock.[1] The company commissioned a small team to map out every place that financial data moved, and to no one’s surprise, it uncovered a lot of un-coordinated, manual processes. Next the team put into place some simple automation that managed inventory and financial data, an online order entry system, and even some automated human resources capabilities. The company found that with these more streamlined processes, days were cut out of order processing, inventories were reduced, and productivity was significantly improved.
The irony is that the fast-moving company had to slow down to speed up; more discipline led to higher speed. This is not an isolated case; there is a direct relationship between speed and discipline. Adding discipline should streamline a process, and streamlined processes don’t work without discipline.
When I first heard the term Lean Six Sigma, I wondered what Lean added to Six Sigma. I found that the answer is speed. The first principle of Lean Six Sigma is: Delight your customers with speed and quality. The second principle says: Improve process flow and speed. Lean Six Sigma emphasizes that speed is directly tied to excellence.
Speed is not the same thing as schedule. Schedule is about when something is supposed to get done; speed is about how fast it gets done. Speed has a bad reputation; it is often equated with hasty, undisciplined work. But if Lean Six Sigma has anything to teach us, it is that we should be looking for opportunities to streamline our core processes. This does not mean we should be compressing already tight schedules. It means that we first determine what our core processes are, and then focus on making them flow smoothly.
For example, core processes in software development would be naming conventions and coding standards, a configuration management system, an automated build process, a suite of automated unit tests that are built and maintained as part of the code, daily build/integrate/test cycles, acceptance testing integrated into the development process, and usability testing immediately after the features are implemented. Assuring that these disciplines are in place is fundamental to the smooth flow of any software development process.
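One of those disciplines, a suite of automated unit tests built and maintained as part of the code, can be as lightweight as the following hypothetical example using Python's standard unittest module (the function under test is invented for illustration):

```python
import unittest

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    parts = text.strip().split(".")
    if len(parts) != 3:
        raise ValueError(f"expected three components, got {text!r}")
    return tuple(int(p) for p in parts)

class ParseVersionTest(unittest.TestCase):
    def test_parses_simple_version(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_rejects_malformed_input(self):
        with self.assertRaises(ValueError):
            parse_version("1.2")

if __name__ == "__main__":
    # exit=False lets the test run be embedded in a larger script; in a
    # daily build/integrate/test cycle this runs automatically on every build.
    unittest.main(exit=False)
```

Because the tests live next to the code and run in every build, breakage is caught within a day of being introduced rather than at the end of the project.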
However, the most important process to streamline in a development project is the knowledge-creating process. Whether we are developing a new product or a new software system, the fundamental thing we are doing is discovering what needs to be in the system in order to delight the customer. Lean thinking supports two basic disciplines for speeding up the knowledge creation process: short, frequent learning cycles and delayed commitment.
Short, frequent learning cycles are the antithesis of thorough front-end planning, but they are the best approach to processes which have the word ‘development’ in their title. In product development, for example, the typical process is to define requirements, choose a solution, refine the solution, and implement the solution. Although this process is common, it is not the best way to generate knowledge.
Toyota has a different approach, one that is much faster and delivers products of superior quality that consistently outsell the competition.[2] Toyota builds sets of possibilities to satisfy customer needs and then, through a series of combining and narrowing, the new product emerges. The combining and narrowing process is paced by timeboxed milestones that define stages of the narrowing process. Milestones are always met, despite the fact that there are no task breakouts or tracking. Decisions are delayed as long as possible, so that they can be based on the maximum amount of information.
When a project involves knowledge creation, rather than just knowledge replication, speed and quality come from improving the flow of creating knowledge. Many of our project management practices have a tendency to impede knowledge creation by forcing early choices and reducing the number of possibilities explored. This often leads us down blind alleys: after a lot of work has gone into a solution, we learn that it has a fatal flaw and we have to retrace our steps and repeat a whole lot of work.
The key to streamlining a development process is to clearly distinguish between true knowledge creating iterations and iterations that lead down blind alleys. Knowledge creating iterations explore multiple options and leave as many possibilities open as possible, delaying decisions until the last responsible moment.
Returning to our software development example, lean practices promote speed and flexibility by implementing core disciplines that promote change tolerance and allow decisions to be delayed as long as possible. Lean software development changes the focus from gathering requirements to encoding all requirements in tests. It introduces the concept of refactoring, that is, creating a simple design at the beginning of development to handle early requirements, and then improving the design later as more requirements are discovered. Finally, lean software development requires full testing and integration of code as soon as it is developed, on a daily basis at minimum. The result is easily maintainable code that has been designed to be flexible and built to be rapidly changed.
In lean software development, scope is not set at the beginning; small feature sets are added based on priority determined by their ROI.[3] This tends to lead to a significant increase in both speed and productivity for a simple reason: most of the features we put into software systems are never going to be used. How can this be? When we freeze scope early, we encourage our customers, who don’t really know what they want, to ask for everything they can imagine. When we delay commitment on scope until we are well into the knowledge generating process, we end up reducing scope down to the minimum set that is really going to pay off.
If we’re not careful, Six Sigma might lead us to apply practices aimed at improving replication processes to knowledge generating processes. This often leads to slow, unresponsive, change-intolerant practices that are not appropriate for knowledge creation. Quite often the slow, deliberate nature of these practices is mistaken to be a sign of good discipline. But when we add Lean to Six Sigma, we discover that we need to re-think these slow processes, because we have come to understand that speed, discipline, and excellence go hand-in-hand.
References
[1] “Better Safe Than Sorry,” by Jeffrey Rothfeder, CIO Insight, February 2004
[2] Product Development for the Lean Enterprise, by Michael N. Kennedy, Oaklea Press, 2003.
[3] Software by Numbers, by Mark Denne and Jane Cleland-Huang, Prentice Hall, 2004
Friday, 12 March 2004
Product Development for the Lean Enterprise (Book Review)
“How could a business book keep me up until 2:30 in the morning?” I wondered as I collapsed into bed. True, it was a business novel, so it had engaging characters, a hint of a plot, and actual villains. But that wasn’t what kept me reading into the wee hours of the morning. All through the book I kept applauding Michael Kennedy for doing such an excellent job of showing how to apply lean thinking to product development. “He gets it!” I kept saying to myself. “He understands that product development is a whole different ballgame than manufacturing.”
The book’s narrator is three weeks from retirement and coasting. His boss has just been asked to take over a flagging engineering department, and our narrator gets a week to assemble a swat team and figure out how to revamp the product development process. Buried deep in the organization is an engineer who has studied how Toyota does product development. We follow her as she first convinces the swat team and then the executives that they need to change the way they think about product development.
We learn that product development is a knowledge-creating process, and learn what that means: Entrepreneurial leadership, responsibility-based planning, expert workforce, and set-based concurrent engineering. Before you fall asleep, note that the novel format helps make these concepts come alive. The swat team has to digest a new paradigm for product development, and then sell it to an executive who has the mistaken notion that lean product development should be pretty much like lean manufacturing.
After the book convinces you that you can’t treat development like production, it goes on to describe in detail what does work for development in an understandable and practical way. I learned a lot about set-based design, and I really liked the description of responsibility-based planning and control.
The book ends with a ‘where do we go from here’ section, offering large group interventions as a way to trigger participative change. The book didn’t offer a lot of guidance on exactly what to do, but it does draw an interesting parallel between the desired new product development process and the change process itself.
Business novels, by their nature, provide ideas and examples rather than specifics of how to proceed in situations that differ from the ideal presented in the novel. But a well-written business novel such as this one provides a good way to clarify important ideas in a quick-to-read, entertaining style. It’s easier to attack sacred cows in fiction, and easier to help the reader visualize what might happen if a real paradigm shift takes place. If you liked Goldratt’s novel “The Goal”, then you might stay up most of the night reading this book, just like I did.
Reference
Product Development for the Lean Enterprise by Michael N. Kennedy, Oaklea Press, 2003.
The book’s narrator is three weeks from retirement and coasting. His boss has just been asked to take over a flagging engineering department, and our narrator gets a week to assemble a swat team and figure out how to revamp the product development process. Buried deep in the organization is an engineer who has studied how Toyota does product development. We follow her as she first convinces the swat team and then the executives that they need to change the way they think about product development.
We learn that product development is a knowledge-creating process, and learn what that means: Entrepreneurial leadership, responsibility-based planning, expert workforce, and set-based concurrent engineering. Before you fall asleep, note that the novel format helps make these concepts come alive. The swat team has to digest a new paradigm for product development, and then sell it to an executive who has the mistaken notion that lean product development should be pretty much like lean manufacturing.
After the book convinces you that you can’t treat development like production, it goes on to describe in detail what does work for development in an understandable and practical way. I learned a lot about set-based design, and I really liked the description of responsibility-based planning and control.
The book ends with a ‘where do we go from here’ section, offering large group interventions as a way to trigger participative change. The book didn’t offer a lot of guidance on exactly what to do, but it does draw an interesting parallel between the desired new product development process and the change process itself.
Business novels, by their nature, provide ideas and examples, rather than specifics of how proceed in situations that differ from the ideal presented in the novel. But a well-written business novels such as this one provide good way to clarify important ideas in a quick-to-read, entertaining style. It’s easier to attack sacred cows in fiction, and easier help the reader visualize what might happen if a real paradigm shift takes place. If you liked Goldratt’s novel “The Goal” then you might stay up most of the night reading this book, just like I did.
Reference
Product Development for the Lean Enterprise by Michael N. Kennedy, Oaklea Press, 2003.
Friday, 20 February 2004
Incremental Funding (Book Review)
The customer wanted an on-line currency exchange capability added to their on-line financial service offerings. They figured it would take several months to implement. But the development team suggested a different approach: start by hiring a dozen telephone operators and implement the necessary software for these folks to execute currency trades. The company gave it a try, and in six weeks the first iteration was ready. With no more than an 800 number on their web site and a rudimentary interface to the currency market, new business was being transacted and profits being made.
Over the course of the next several months, the on-line trading capability was implemented around the core module originally used by the telephone operators. Not only did the company see early revenue, but the risk of failure disappeared once trading started. In addition, system requirements were defined by observing real trades.
In the book Software by Numbers – Low Risk, High Return Development (Prentice Hall, 2004) Mark Denne and Jane Cleland-Huang make the case for incremental delivery of software. Mark Denne developed the Incremental Funding Methodology (IFM) in the 1990’s to help land a large software development contract. In an attempt to distinguish his bid from the pack, he reorganized the deliveries into units of value, and adjusted the development sequence so that the customer would realize revenue faster and, in the end, receive a greater return on their investment. The customer discovered that this approach dramatically reduced their need to borrow money and gave them earlier product release with lowered risk. In what seemed like a hotly competitive bidding process, Mark’s company won the bid by emphasizing “time to value” instead of development efficiency.
Software by Numbers recommends dividing a project into Minimum Marketable Features (MMF’s). These are small feature sets which deliver some identifiable value to the customer. Each MMF should have its own return on investment (ROI). By laying out the potential ROI’s of various feature sets, an optimal development sequence for the MMF’s can be determined. Early deployment of key MMF’s reduces risk while generating revenue to help fund the remainder of the project.
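The sequencing idea can be sketched in a few lines of code. The following is my own toy illustration, not the book’s actual model: the MMF figures are invented, and it uses a brute-force search over delivery orders where the book develops more refined heuristics. Each MMF takes some periods to build and, once live, earns revenue every period; we pick the order that maximizes net present value.

```python
from itertools import permutations

# Hypothetical MMFs: development periods and revenue per period once live.
# All figures are invented for illustration.
mmfs = {
    "A": {"dev_periods": 2, "revenue": 50},
    "B": {"dev_periods": 1, "revenue": 30},
    "C": {"dev_periods": 3, "revenue": 40},
}
COST_PER_PERIOD = 60   # development spend per period
HORIZON = 12           # planning horizon in periods
RATE = 0.02            # discount rate per period

def npv(sequence):
    """Net present value of building the MMFs one after another in this order."""
    total, period = 0.0, 0
    live = []  # MMFs already deployed and earning revenue
    for name in sequence:
        for _ in range(mmfs[name]["dev_periods"]):
            cash = sum(mmfs[m]["revenue"] for m in live) - COST_PER_PERIOD
            total += cash / (1 + RATE) ** period
            period += 1
        live.append(name)
    while period < HORIZON:  # after the last delivery, revenue keeps flowing
        total += sum(mmfs[m]["revenue"] for m in live) / (1 + RATE) ** period
        period += 1
    return total

best = max(permutations(mmfs), key=npv)
print("Best sequence:", " -> ".join(best), "| NPV:", round(npv(best), 1))
```

With these numbers the short, high-revenue feature goes first; change the figures and the sequence changes, which is exactly the conversation the book wants business people and developers to have.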
When business people and software developers focus on identifying and valuing marketable features, their conversation is changed. Developers are exposed to ROI and stakeholders are confronted with the realities of software development. The entire team is focused on achieving business ROI early in the development cycle. Management sees continuously measurable progress, and the team benefits from the early and compelling feedback generated by real software being used in production.
With so many benefits, why wouldn’t IFM be the preferred approach for developing software? Denne and Cleland-Huang note that many practitioners view software architecture as a monolithic whole, requiring early definition because of the extensive impact that architectural changes can have on a system. On the other hand, they argue, it is not until the details of an architecture are implemented that one can tell if the architecture is viable. Thus architecture presents us with a chicken and egg problem.
The book recommends that architectural elements which support each set of MMF’s should be developed with their respective MMF’s. In other words, architectural development should be sequenced using the same financially driven priorities as feature development. While there may be times when architectural coherency dictates early development of features not immediately related to the current feature set, in general it is not only possible but also preferable to defer implementation of architectural elements until the features requiring these elements are being developed.
Once we recognize that the architecture of any deployed system not only can evolve but inevitably will evolve over time, the view that architecture must be fixed early in the development process becomes a liability: it produces architectures that are not tolerant of the change they must inevitably undergo. Once we accept that software architectures must be designed to be change-tolerant, the barrier to early deployment of high value features is lowered.
Returning to the currency trading example, we see that early deployment of marketable features is a compelling strategy for increasing return and reducing risk. At the same time the stage is set for a better understanding of the real system requirements and improved collaboration between developers and their customers.
Reference
Software by Numbers: Low-Risk, High-Return Development by Mark Denne, Jane Cleland-Huang, Prentice Hall, 2004
Screen Beans Art, © A Bit Better Corporation
Sunday, 1 February 2004
Toward A New Definition of Maturity
In the mid-90's, a company named Zeos assembled PC's in my home state of Minnesota. I was impressed when Zeos was named as a finalist for the Malcolm Baldrige Award, because competing for this quality award is more or less the equivalent of a software company trying to reach CMM Level 5. At the same time that Zeos was focusing on the Malcolm Baldrige Award, a similar company in Austin, Texas, called Dell Computer, was focusing all of its energy on two rather different objectives: 1) keep inventory as low as possible, because it is the biggest risk in the PC assembly business, and 2) deliver computers just as fast as possible after customers decide what they want. Dell could not possibly meet these goals unless it had the key Malcolm Baldrige capabilities in place, but that was not its focus. On the other hand, in order to be a finalist, Zeos had to spend huge amounts of executive and management attention on Malcolm Baldrige activities. So which of these two companies would you call the more mature?
A company does not rise to the top of its industry by perfecting its normative procedures. While General Motors was busy refining its four-phase process for product development, Toyota and Honda were developing cars in something approaching half the time, for half the cost. The resulting cars were less expensive to produce and captured a much larger market share. Yet when looked at through Detroit’s eyes, the development approach of the Japanese companies was decidedly immature – why, they even let die makers start cutting dies long before the final drawings were released!
The problem with maturity models is that they foster a mental model of ‘good practice’ that can block out paradigm shifts that are destined to reshape the industry. This has happened in manufacturing. It has happened in product development. Can it be happening in software development today?
Why Do We Do This To Ourselves?
We human beings have a tendency to decompose complex problems into smaller pieces, and then focus on the individual pieces, often, unwittingly, at the expense of the whole. One reason for this is that people can only hold a few chunks of information in short-term memory at once; Miller’s law claims that this number is seven plus or minus two. Thus we have a strong tendency to take concepts which we cannot get our minds around and decompose them into parts, so we can deal with one part at a time. There is nothing wrong with this very natural tendency to decompose problems, except that we often forget to reconfigure the parts into a whole and take their interactions into account.
The problem is, decomposition only works if the whole is indeed equal to the sum of its parts, and interactions between the parts can be ignored. In practice, this is rarely the case; in fact, optimization of the parts has a tendency to sub-optimize the whole. For example, a manager might think that the best way to run a testing department is to make sure that every single person is working all of the time. In order to guarantee maximum productivity of each individual, he makes sure there is always a pile of testing waiting to be done. Down the hall, the operations manager knows better. She knows that if she runs the servers at high utilization, the entire system bogs down, just like rush hour traffic. If only the testing manager would realize that his policy of full utilization of testing resources creates the same kind of traffic jam in software development!
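The operations manager’s intuition is standard queueing theory. As a rough sketch (my own illustration using the textbook M/M/1 formula, not anything from the original text), average turnaround grows as 1/(1 − utilization), so delays explode as a shared resource approaches full utilization:

```python
SERVICE_TIME = 1.0  # time to process one job, in arbitrary units

def time_in_system(utilization):
    """Average turnaround for an M/M/1 queue, including service time."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return SERVICE_TIME / (1 - utilization)

for rho in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {rho:.0%}: turnaround {time_in_system(rho):.0f}x service time")
```

At 50% utilization a job takes twice its service time; at 95% it takes twenty times as long. A testing department kept fully busy behaves exactly like the clogged servers down the hall.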
Decomposition of a problem into pieces is a standard problem solving heuristic, but it needs to be accompanied by regular aggregation of the parts into a whole. It is here that iterative development shines, because it forces us to develop, test, integrate, and release complete threads of the system early and often. Thus we reset our view of the big picture every iteration. However, iterative development requires that we decompose the problem space along dimensions that are orthogonal to those traditionally recommended. Instead of decomposing the problem into requirements, analysis, design, programming, testing, integration, and deployment, we decompose the problem into features along the natural fault lines of the domain.
Decomposition vs. Abstraction
There is an alternative to the decomposition of a problem into components, and that is to abstract the problem to a higher level, rather than decompose it to a lower level. The result is the same in both cases – you have reduced the number of things you need to think about to 7 +/- 2. But when you move the problem up to a higher level of abstraction, the interaction of the parts is maintained, so abstractions would seem to be the better approach to solving complex problems.
There is a catch: people can’t create abstractions without understanding the domain, because correct abstractions require that the abstractor knows what is important, what fits together naturally, and where to find the natural joints or fault lines of the domain. For this reason, experts in the domain have a greater tendency to abstract to a higher level, while those who are not familiar with the domain tend to decompose the problem, and that decomposition will probably not be along the natural fault lines of the domain.
It is when we decompose a problem in a manner which does not fit the domain, and then drill down to the details too fast, that important things get missed. We would like to think that a lot of early detail will help us find all the hidden gremlins that might bite us later. This is only true if those gremlins are actually inside the areas we investigate – but the tough problems usually lurk between the cracks in our thinking. Unfortunately we only discover those problems when we integrate the pieces together at the end, and by that time we have invested so much in the details that change is very difficult.
It is safer to work from a higher level of abstraction, but this means we don’t drill down to detail right away. We take a breadth-first approach, keep options open, and gradually fill in the details. With this approach we are more likely to find those gremlins while we can still do something about them, but some people might think that we don’t have a plan, aren’t tracking the project, and can’t manage requirements.
Assessment Rather than Certification
Quite frequently CMM is implemented as a certification process, decomposing ‘mature’ into separately verifiable capabilities. The danger is that we may lose sight of the forest for the trees. For example, focusing on requirements management can be detrimental to giving users what they really want. Focusing on quality assurance separate from development can destroy the integrating effect of developers and testers working side-by-side every day. Focusing on planning as a predictive process can keep us from using planning as an organizing process.
A better approach to discovering the underlying competence of an organization is to use an assessment process rather than a certification process. Assessments present challenging situations that cannot be dealt with unless a host of capabilities are in place; if the challenge is properly handled, the presence of these capabilities can be inferred. For example, a pilot’s ability to fly a plane can be assessed by observing how the pilot lands a plane in a stiff cross wind.
Consider your hiring process. When a candidate lists Microsoft or PMI certifications, you take that into account, but you are really looking for a track record of success which demonstrates that the certifications were put to good use. One software company I know administers a one-hour logic test to job applicants, a test without a line of code in it. Yet the test does an admirable job of assessing a candidate’s capability to think like a good developer.
Measure UP
If we accept that we get what we measure, then, as Rob Austin points out in “Measuring and Managing Performance in Organizations,” we do not get what we don’t measure. If we are not measuring everything, if something important just didn’t make it into our measurement system, then we are not going to get it. To counter this problem, we have a tendency to pile one measurement on top of another every time we discover that we forgot to measure something. In CMM, for example, each KPA addresses something that caused a failure of some software project somewhere. As people discovered new ways for software projects to fail, a KPA was added to address the new failure mode. Nevertheless, all of those KPA’s still don’t cover everything that could go wrong, although they certainly try.
Once we admit that we just can’t measure everything, we are ready to move to assessment-style measurements rather than certification-style measurements. For example, in project management we decompose the measurement system into cost, schedule, scope, and defects, and we try hard to make these measurements work because we think they are the measurements we should use. But no matter how hard we try, we are often unsuccessful in measuring true business value by totaling up the yardsticks of cost, schedule, scope, and defects.
What if we just measured business value instead of cost, schedule, scope, and defects? When I was developing new products at 3M, we did not pay much attention to cost, schedule, and scope. Instead we developed a P&L which we would use to check the impact of a late introduction date or a lower unit cost. The development team tried to optimize the overall P&L, not any one dimension. It may seem strange that a project team would concern itself with the financial model of the business, but I assure you it is a far better decision-support tool than cost, schedule, and scope.
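A toy version of such a P&L model shows the idea. Every figure here is invented for illustration (these are not 3M’s numbers): delay costs market share and development spend, a lower unit cost widens the margin, and the team optimizes the total rather than any single yardstick.

```python
def lifetime_profit(delay_months, unit_cost):
    """Simplified product P&L: lifetime profit for one launch scenario."""
    PRICE = 100.0                  # selling price per unit
    BASE_UNITS = 100_000           # lifetime unit sales if launched on time
    LOST_UNITS_PER_MONTH = 5_000   # sales lost for each month of delay
    DEV_COST_PER_MONTH = 200_000   # burn rate while still developing
    BASE_DEV_MONTHS = 12

    units = BASE_UNITS - LOST_UNITS_PER_MONTH * delay_months
    margin = units * (PRICE - unit_cost)
    dev_cost = DEV_COST_PER_MONTH * (BASE_DEV_MONTHS + delay_months)
    return margin - dev_cost

# Two scenarios a team might weigh against each other:
on_time = lifetime_profit(delay_months=0, unit_cost=60.0)
late_but_cheaper = lifetime_profit(delay_months=3, unit_cost=55.0)
print(f"on time, $60 unit cost:       ${on_time:,.0f}")
print(f"3 months late, $55 unit cost: ${late_but_cheaper:,.0f}")
```

With these particular numbers the on-time launch wins even though its unit cost is higher; the point is that the P&L, not any single dimension, makes the trade-off visible to the whole team.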
The Measure of Maturity
I believe that our industry could use a simpler measurement of software development maturity, one that is not subject to the dangers of decomposition and does not attempt to defy Miller’s Law with a long list of things to think about. I propose that we use a measurement that has been used successfully in countless organizations, and has proven to be a true indicator of the presence of the kind of capabilities measured by CMM. To introduce this measurement, let’s go back to Dell and see what it measured:
- The level of inventory throughout the entire system, and
- The speed with which the organization can repeatedly and reliably respond to a customer request.
You might counter that Dell assembles computers while you develop software. True, but these measurements still work.
The level of inventory in your system is the amount of stuff you have under development. The more inventory of unfinished development work you have sitting around, the more you are at risk of it growing obsolete, getting lost, and hiding defects. If you capitalize it, you also bear the risk of having to write it off if it doesn’t work. The less of that kind of stuff you have on hand, the better off you will be.
The speed with which you can respond to a customer is directly proportional to the amount of unfinished development work you have clogging up your system. In truth, the two measurements above are one and the same – you can deliver faster if your system is not clogged with unfinished work. But more to the point, you cannot reliably and repeatedly deliver fast if you do not have a mature organization. You need version control, built-in quality, and ways to gather requirements fast and routinely translate them correctly into code. You need everything you measure in CMM, and you need all of those capabilities working together as an integrated whole. In short, the speed with which you can repeatedly and reliably deliver on customers’ requests is a better measure of the maturity of your software development organization.
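This proportionality is Little’s Law from queueing theory (my gloss; the essay states the relationship without naming it): average delivery time equals work-in-process divided by throughput, so at a given throughput, cutting unfinished work cuts response time directly.

```python
def avg_cycle_time(wip_features, throughput_per_week):
    """Little's Law: average time from request to delivery, in weeks."""
    return wip_features / throughput_per_week

# Same team, same throughput of 3 features per week; only the pile of
# unfinished work differs (numbers invented for illustration).
print(avg_cycle_time(wip_features=60, throughput_per_week=3))  # 20.0 weeks
print(avg_cycle_time(wip_features=9, throughput_per_week=3))   # 3.0 weeks
```

The team delivering from a 9-feature backlog responds in 3 weeks; the one sitting on 60 unfinished features takes 20, with no difference in capability.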