Thursday, 21 January 2016

Five World-Changing Software Innovations

On the 15th anniversary of the Agile Manifesto, let's look at what else was happening while we were focused on spreading the Manifesto's ideals. There have been some impressive advances in software technology since Y2K:
          1.   The Cloud
          2.   Big Data
          3.   Antifragile Systems
          4.   Content Platforms
          5.   Mobile Apps 

The Cloud

In 2003 Nicholas Carr’s controversial article “IT Doesn’t Matter” was published in Harvard Business Review. He claimed that “the core functions of IT – data storage, data processing, and data transport” had become commodities, just like electricity, and no longer provided differentiation. It’s amazing how right – and how wrong – that article turned out to be. At the time, perhaps 70% of an IT budget was allocated to infrastructure, and that infrastructure rarely offered a competitive advantage. On the other hand, since there was nowhere to purchase IT infrastructure as if it were electricity, a huge competitive advantage awaited the company that figured out how to package and sell such infrastructure.

At the time, IT infrastructure was a big problem – especially for rapidly growing companies like Amazon.com. Amazon had started out with the standard enterprise architecture: a big front end coupled to a big back end. But the company was growing much faster than this architecture could support. CEO Jeff Bezos believed that the only way to scale to the level he had in mind was to create small autonomous teams. Thus by 2003, Amazon had restructured its digital organization into small (two-pizza) teams, each with end-to-end responsibility for a service. Individual teams were responsible for their own data, code, infrastructure, reliability, and customer satisfaction.

Amazon’s infrastructure was not set up to deal with the constant demands of multiple small teams, so things got chaotic for the operations department. This led Chris Pinkham, head of Amazon’s global infrastructure, to propose developing a capability that would let teams manage their own infrastructure – a capability that might eventually be sold to outside companies. As the proposal was being considered, Pinkham decided to return to South Africa where he had gone to school, so in 2004 Amazon gave him the funding to hire a team in South Africa and work on his idea. By 2006 the team’s product, Elastic Compute Cloud (EC2), was ready for release. It formed the kernel of what would become Amazon Web Services (AWS), which has since grown into a multi-billion-dollar business.

Amazon has consistently added software services on top of the hardware infrastructure – services like databases, analytics, access control, content delivery, containers, data streaming, and many others. It’s sort of like an IT department in a box, where almost everything you might need is readily available. Of course Amazon isn’t the only cloud company – it has several competitors.
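
To make the idea of infrastructure as a purchasable commodity concrete, here is a minimal sketch using boto3, the AWS SDK for Python: a team rents a virtual server with a few lines of code instead of a purchase order. The image id and instance type are placeholders, and the calls assume AWS credentials are already configured.

    import boto3

    # Rent a small virtual server from AWS. The image id is a placeholder;
    # the call assumes AWS credentials are already configured on this machine.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-12345678",     # placeholder Amazon Machine Image
        InstanceType="t2.micro",    # a small, inexpensive instance type
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched instance:", instance_id)

    # When the experiment is over, give the hardware back just as easily.
    ec2.terminate_instances(InstanceIds=[instance_id])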

So back to Carr’s article: does IT matter? Clearly the portion of a company’s IT that could be provided by AWS or similar cloud services does not provide differentiation, so from a competitive perspective, it doesn’t matter. If a company can’t provide infrastructure that matches the capability, cost, accessibility, reliability, and scalability of the cloud, then it may as well outsource its infrastructure to the cloud.

Outsourcing used to be considered a good cost reduction strategy, but often there was no clear distinction between undifferentiated context (which didn’t matter) and core competencies (which did). So companies frequently outsourced the wrong things – critical capabilities that nurtured innovation and provided competitive advantage. Today it is easier to tell the difference between core and context: if a cloud service provides it, then anybody can buy it, so it’s probably context; what’s left is all that's available to provide differentiation. In fact, one reason why “outsourcing” as we once knew it has fallen into disfavor is that today, much of the outsourcing is handled by cloud providers.

The idea that infrastructure is context and the rest is core helps explain why internet companies do not have IT departments. For the last two decades, technology startups have chosen to divide their businesses along core and infrastructure lines rather than along technology lines. They put differentiating capabilities in the line business units rather than relegating them to cost centers, which generally works a lot better. In fact, many IT organizations might work better if they were split into two sections, one (infrastructure) treated as a commodity and the rest moved into (or changed into) a line organization. 


Big Data

In 2001 Doug Cutting released Lucene, a text indexing and search program, under the Apache software license. Cutting and Mike Cafarella then wrote a web crawler called Nutch to collect interesting data for Lucene to index. But now they had a problem – the web crawler could index 100 million pages before it filled up the terabyte of storage they could easily fit on one machine. At the time, managing large amounts of data across multiple machines was not a solved problem; most large enterprises stored their critical data in a single database running on a very large computer.

But the web was growing exponentially, and when companies like Google and Yahoo set out to collect all of the information available on the web, currently available computers and databases were not even close to big enough to store and analyze all of that data. So they had to solve the problem of using multiple machines for data storage and analysis. 

One of the bigger problems with using multiple machines is the increased probability that one of the machines will fail. Early in its history, Google decided to accept the fact that at its scale, hardware failure was inevitable, so it should be managed rather than avoided. This was accomplished by software that monitored each computer and disk drive in a data center, detected failures, kicked the failed component out of the system, and replaced it with a new one. This process required keeping multiple copies of all data, so that when hardware failed, the data it held was available in another location. Since recovering from a big failure carried more risk than recovering from a small failure, the data centers were stocked with inexpensive PC components that would experience many small failures. The software needed to detect and quickly recover from these “normal” hardware failures was perfected as the company grew.
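
The mechanism is easier to sketch than to perfect. Here is a hypothetical outline of that repair loop; the cluster and node objects are invented for illustration and are not Google's actual software.

    import time

    REPLICATION_FACTOR = 3   # keep three copies of every chunk of data

    def repair_loop(cluster):
        """Detect failed nodes, evict them, and re-replicate the data they held."""
        while True:
            for node in cluster.nodes():
                if node.heartbeat_ok():
                    continue
                cluster.evict(node)                      # kick the failed component out
                for chunk in node.chunks():              # its data exists elsewhere...
                    replicas = cluster.live_replicas(chunk)
                    while len(replicas) < REPLICATION_FACTOR:
                        target = cluster.pick_healthy_node(exclude=replicas)
                        target.copy_chunk_from(replicas[0], chunk)   # ...so copy it again
                        replicas.append(target)
            time.sleep(30)                               # then go around again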

In 2003 Google employees published two seminal papers describing how the company dealt with the massive amounts of data it collected and managed. Web Search for a Planet: The Google Cluster Architecture by Luiz André Barroso, Jeffrey Dean, and Urs Hölzle described how Google managed its data centers full of inexpensive components. The Google File System by Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung described how the data was managed by dividing it into small chunks and maintaining multiple copies (typically three) of each chunk across the hardware. I remember that my reaction to these papers was “So that’s how they do it!” And I admired Google for sharing these sophisticated technical insights.

Cutting and Cafarella had approximately the same reaction. Using the Google File System as a model, they spent 2004 working on a distributed file system for Nutch. The system abstracted a cluster of storage into a single file system running on commodity hardware, used relaxed consistency, and hid the complexity of load balancing and failure recovery from users. 
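
Here is a toy sketch of the chunk-and-replicate idea they were working toward; the 64 MB chunk size comes from the Google File System paper, while the node names and random placement are simplifying assumptions.

    import random

    CHUNK_SIZE = 64 * 1024 * 1024      # 64 MB chunks, as in the original GFS paper
    REPLICATION_FACTOR = 3             # three copies of every chunk

    def place_chunks(file_size_bytes, nodes):
        """Split a file into fixed-size chunks and assign each chunk to three nodes."""
        num_chunks = (file_size_bytes + CHUNK_SIZE - 1) // CHUNK_SIZE
        placement = {}
        for chunk_id in range(num_chunks):
            placement[chunk_id] = random.sample(nodes, REPLICATION_FACTOR)
        return placement

    # A one-terabyte file spread across a ten-node cluster:
    layout = place_chunks(10**12, nodes=[f"node-{i}" for i in range(10)])
    print(len(layout), "chunks, each stored on", REPLICATION_FACTOR, "nodes")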

In the fall of 2004, the next piece of the puzzle – analyzing massive amounts of stored data – was addressed by another Google paper: MapReduce: Simplified Data Processing on Large Clusters by Jeffrey Dean and Sanjay Ghemawat. Cutting and Cafarella spent 2005 rewriting Nutch and adding MapReduce, which they released as Apache Hadoop in 2006. At the same time, Yahoo decided it needed to develop something like MapReduce, and settled on hiring Cutting and building Apache Hadoop into software that could handle its massive scale. Over the next couple of years, Yahoo devoted a lot of effort to converting Apache Hadoop – open source software – from a system that could handle a few servers to a system capable of dealing with web-scale databases. In the process, their data scientists and business people discovered that Hadoop was as useful for business analysis as it was for web search.
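
The programming model itself is small: a map function turns each input record into key/value pairs, and a reduce function combines all the values that share a key. Here is a stripped-down, single-machine sketch of the classic word-count example; a real MapReduce runtime would distribute the map and reduce tasks across a cluster.

    from collections import defaultdict

    def map_phase(document):
        # map: emit (word, 1) for every word in the input
        for word in document.split():
            yield (word.lower(), 1)

    def reduce_phase(word, counts):
        # reduce: sum all the counts emitted for a given word
        return (word, sum(counts))

    documents = ["the web grows fast", "the web never sleeps"]

    # shuffle: group every emitted value by its key
    grouped = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)

    results = [reduce_phase(word, counts) for word, counts in grouped.items()]
    print(sorted(results))   # [('fast', 1), ('grows', 1), ..., ('the', 2), ('web', 2)]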

By 2008, most web-scale companies in Silicon Valley – Twitter, Facebook, LinkedIn, etc. – were using Apache Hadoop and contributing their improvements. Then startups like Cloudera were founded to help enterprises use Hadoop to analyze their data. What made Hadoop so attractive? Until that time, useful data had to be structured in a relational database and stored on one computer. Space was limited, so you only kept the current value of any data element. Hadoop could take unlimited quantities of unstructured data stored on multiple servers and make it available for data scientists and software programs to analyze. It was like moving from a small village to a megalopolis – Hadoop opened up a vast array of possibilities that are just beginning to be explored.

In 2011 Yahoo found that its Hadoop engineers were being courted by the emerging Big Data companies, so it spun off Hortonworks to give the Hadoop engineering team their own Big Data startup to grow. By 2012, Apache Hadoop (still open source) had so many data processing appendages built on top of the core software that MapReduce was split off from the underlying distributed file system. The cluster resource management that used to be part of MapReduce was replaced by YARN (Yet Another Resource Negotiator). This gave Apache Hadoop another growth spurt, as MapReduce joined a growing number of analytical capabilities that run on top of YARN. Apache Spark is one of those analytical layers; it supports data analysis tools that are more sophisticated and easier to use than MapReduce. Machine learning and analytics on data streams are just two of the many capabilities that Spark offers – and there are certainly more Hadoop tools to come. The potential of Big Data is just beginning to be tapped.
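
To get a feel for how much friendlier the newer layers are, here is roughly the same word count written against Spark's Python API; it is a sketch that assumes a local Spark installation and a text file named input.txt.

    # Roughly the same word count in Spark's Python API (PySpark).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount").getOrCreate()

    counts = (
        spark.sparkContext.textFile("input.txt")
        .flatMap(lambda line: line.lower().split())
        .map(lambda word: (word, 1))
        .reduceByKey(lambda a, b: a + b)
    )

    print(counts.take(10))   # the first ten (word, count) pairs
    spark.stop()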

In the early 1990’s Tim Berners-Lee worked to ensure that CERN made his underlying code for HTML, HTTP, and URLs available on a royalty-free basis, and because of that we have the World Wide Web. Ever since, software engineers have understood that the most influential technical advances come from sharing ideas across organizations, allowing the best minds in the industry to come together and solve tough technical problems. Big Data is as capable as it is because Google, Yahoo, and many other companies were willing to share their technical breakthroughs rather than keep them proprietary. In the software industry we understand that we do far better as individual companies when the industry as a whole experiences major technical advances.


Antifragile Systems

It used to be considered unavoidable that as software systems grew in age and complexity, they became increasingly fragile. Every new release was accompanied by fear of unintended consequences, which triggered extensive testing and longer periods between releases. However, the “failure is not an option” approach is not viable at internet scale – because things will go wrong in any very large system. Ignoring the possibility of failure – and focusing on trying to prevent it – simply makes the system fragile. When the inevitable failure occurs, a fragile system is likely to break down catastrophically.[1]  

Rather than prevent failure, it is much more important to identify and contain failure, then recover with a minimum of inconvenience for consumers. Every large internet company has figured this out. Amazon, Google, Etsy, Facebook, Netflix and many others have written or spoken about their approach to failure. Each of these companies has devoted a lot of effort to creating robust systems that can deal gracefully with unexpected and unpredictable situations.

Perhaps the most striking among these is Netflix, which has a good number of reliability engineers despite the fact that it has no data centers. Netflix’s approach was described in 2013 by Ariel Tseitlin in the article The Antifragile Organization: Embracing Failure to Improve Resilience and Maximize Availability. The main way Netflix increases the resilience of its systems is by regularly inducing failure with a “Simian Army” of monkeys: Chaos Monkey does some damage twice an hour, Latency Monkey simulates instances that are sick but still working, Conformity Monkey shuts down instances that don’t adhere to best practices, Security Monkey looks for security holes, Janitor Monkey cleans up clutter, Chaos Gorilla simulates failure of an AWS availability zone, and Chaos Kong might take a whole Amazon region offline. I was not surprised to hear that during a recent failure of an Amazon region, Netflix customers experienced very little disruption.
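
Here is a bare-bones sketch of the Chaos Monkey idea, written against the AWS API with boto3: pick one running instance at random and terminate it. This is not Netflix's code, and the opt-in tag used to limit the blast radius is an illustrative assumption.

    import random
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def unleash_chaos_monkey():
        # Find running instances that have opted in to chaos testing.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "instance-state-name", "Values": ["running"]},
                {"Name": "tag:chaos-monkey", "Values": ["enabled"]},   # opt-in only
            ]
        )["Reservations"]
        instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if instances:
            victim = random.choice(instances)
            print("Chaos Monkey is terminating", victim)
            ec2.terminate_instances(InstanceIds=[victim])

    unleash_chaos_monkey()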

A Simian Army isn’t the only way to induce failure. Facebook’s motto “Move Fast and Break Things” is another approach to stressing a system. In 2015, Ben Maurer of Facebook published Fail at Scale – a good summary of how internet companies keep very large systems reliable despite failure induced by constant change, traffic surges, and hardware failures. 

Maurer notes that the primary goal for very large systems is not to prevent failure – this is both impossible and dangerous. The objective is to find the pathologies that amplify failure and keep them from occurring. Facebook has identified three failure-amplifying pathologies: 

1. Rapidly deployed configuration changes
Human error is amplified by rapid changes, but rather than decrease the number of deployments, companies with antifragile systems move small changes through a release pipeline. Changes are checked for known errors and run in a limited environment, and the system quickly reverts to a known good configuration if (when) problems are found. Because the changes are small and gradually introduced into the overall system under constant surveillance, catastrophic failures are unlikely. In fact, the pipeline increases the robustness of the system over time. (A sketch of this canary-style step follows the list.)

2. Hard dependencies on core services
Core services fail just like anything else, so code has to be written with that in mind. Generally, hardened APIs that embody best practices are used to invoke these services. Core services and their APIs are gradually improved by intentionally injecting failure into a core service to expose weaknesses, which are then corrected as failure modes are identified. (A sketch of one common hardening pattern, a bounded wait with a fallback, follows the list.)

3. Increased latency and resource exhaustion
Best practices for avoiding the well-known problem of resource exhaustion include managing server queues wisely and having clients track their outstanding requests. It’s not that these strategies are unknown; it’s that they must become common practice for all software engineers in the organization. (A sketch combining both defenses follows the list.)
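
A hypothetical sketch of the canary step in such a release pipeline (pathology 1): apply the change to a small slice of servers, watch the error rate, and revert automatically if it degrades. The deploy, rollback, and error_rate functions are stand-ins invented for illustration.

    ERROR_BUDGET = 0.01   # tolerate at most 1% errors on the canary slice

    def release(change, deploy, rollback, error_rate,
                canary=("web-01", "web-02"), rest=("web-03", "web-04")):
        """Deploy to the canary slice first; widen only if the error rate stays healthy."""
        baseline = error_rate(canary)
        deploy(change, canary)                          # small change, limited environment
        if error_rate(canary) > max(baseline, ERROR_BUDGET):
            rollback(change, canary)                    # revert to the known good configuration
            return False                                # the pipeline rejected the change
        deploy(change, rest)                            # gradually widen the rollout
        return True

    # Minimal usage with stand-in functions:
    ok = release(
        change="config-v42",
        deploy=lambda chg, servers: print("deploying", chg, "to", servers),
        rollback=lambda chg, servers: print("rolling back", chg, "on", servers),
        error_rate=lambda servers: 0.001,               # pretend everything is healthy
    )
    print("release accepted:", ok)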
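
For hard dependencies on core services (pathology 2), one common hardening pattern is to bound how long a caller waits and to fall back to a safe default instead of propagating the failure. A sketch, with the service call and fallback value as assumptions:

    import concurrent.futures

    def call_with_fallback(fetch, fallback, timeout_seconds=0.2):
        """Invoke a core service, but never wait longer than the timeout."""
        with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
            future = pool.submit(fetch)
            try:
                return future.result(timeout=timeout_seconds)
            except Exception:                     # timeout or failure in the dependency
                return fallback                   # degrade gracefully instead of crashing

    # Usage with a stand-in service call:
    profile = call_with_fallback(
        fetch=lambda: {"name": "..."},            # imagine a network call here
        fallback={"name": "guest"},               # safe default if the service is sick
    )
    print(profile)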
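
And for resource exhaustion (pathology 3), a sketch of both defenses together: a bounded server queue that rejects work when it is full (failing fast instead of building up latency), and a client that caps its own outstanding requests. The queue length and concurrency limit are illustrative numbers.

    import queue
    import threading

    request_queue = queue.Queue(maxsize=100)       # bounded: better to reject than to drown

    def accept_request(request):
        try:
            request_queue.put_nowait(request)      # fail fast when the queue is full
            return "accepted"
        except queue.Full:
            return "rejected: server busy"         # the client should back off and retry

    class Client:
        MAX_IN_FLIGHT = 10                         # track and cap outstanding requests

        def __init__(self):
            self._in_flight = threading.Semaphore(self.MAX_IN_FLIGHT)

        def send(self, request):
            if not self._in_flight.acquire(blocking=False):
                return "skipped: too many outstanding requests"
            try:
                return accept_request(request)
            finally:
                self._in_flight.release()

    print(Client().send({"path": "/home"}))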

Well-designed dashboards, effective incident response, and after-action reviews that implement countermeasures to prevent recurrence round out Facebook's toolkit for keeping its very large systems reliable.

We now know that fault-tolerant systems are not only more robust but also less risky than systems that we attempt to make failure-free. Therefore, common practice for assuring the reliability of large-scale software systems is moving toward software-managed release pipelines that orchestrate frequent small releases, in conjunction with failure induction and incident analysis to produce hardened infrastructure.


Content Platforms

Video is not new; television has been around for a long time, film for even longer. As revolutionary as film and TV have been, they push content to a mass audience; they do not inspire engagement. An early attempt at visual engagement was the PicturePhone of the 1970’s – a textbook example of a technical success and a commercial disaster. They got the PicturePhone use case wrong – not many people really wanted to be seen during a phone call. Videoconferencing did not fare much better – because few people understood that video is not about improving communication, it’s about sharing experience. 

In 2005, amidst a perfect storm of increasing bandwidth, decreasing cost of storage, and emerging video standards, three entrepreneurs – Chad Hurley, Steve Chen, and Jawed Karim – tried out an interesting use case for video: a dating site. But they couldn’t get anyone to submit “dating videos,” so they accepted any video clips people wanted to upload. They were surprised at the videos they got: interesting experiences, impressive skills, how-to lessons – not what they expected, but at least it was something. The YouTube founders quickly added a search capability. This time they got the use case right, and the rest is history. Video is the printing press of experience, and YouTube became the distributor of experience. Today, if you want to learn the latest unicycle tricks or how to get the back seat out of your car, you can find it on YouTube.

YouTube was not the first successful content platform. Blogs date back to the late 1990’s, when they began as diaries on personal web sites shared with friends and family. Then media companies began posting breaking news on their web sites to get their stories out before their competitors. Blogger, one of the earliest blog platforms, was launched just before Y2K and acquired by Google in 2003 – the same year WordPress was launched. As blogging popularity grew over the next few years, the use case shifted from diaries and news articles to ideas and opinions – and blogs increasingly resembled magazine articles. Those short diary entries meant for friends were more like scrapbooks; they came to be called tumblelogs or microblogs. And – no surprise – separate platforms for these microblogs emerged: Twitter in 2006 and Tumblr in 2007.

One reason why blogs drifted away from diaries and scrapbooks is that alternative platforms emerged aimed at a very similar use case – which came to be called social networking. MySpace was launched in 2003 and became wildly popular over the next few years, only to be overtaken by Facebook, which was launched in 2004. 

Many other public content platforms have come (and gone) over the last decade; after all, a successful platform can usually be turned into a significant revenue stream. But the lessons learned by the founders of those early content platforms remain best practices for two-sided platforms today:

  1. Get the use case right on both sides of the platform. Very few founders got both use cases exactly right to begin with, but the successful ones learned fast and adapted quickly. 
  2. Attract a critical mass to both sides of the platform. Attracting enough traffic to generate network effects requires a dead simple contributor experience and an addictive consumer experience, plus a receptive audience for the initial release.
  3. Take responsibility for content even if you don’t own it. In 2007 YouTube developed Content ID to identify copyrighted audio clips embedded in videos and make it easy for contributors to comply with attribution and licensing requirements. 
  4. Be prepared for and deal effectively with stress. Some of the best antifragile patterns came from platform providers coping with extreme stress such as the massive traffic spikes at Twitter during natural disasters or hectic political events.

In short, successful platforms require insight, flexibility, discipline, and a lot of luck. Of course, this is the formula for most innovation. But don't forget: no matter how good your process is, you still need the luck part.


Mobile Apps

It’s hard to imagine what life was like without mobile apps, but they did not exist a mere eight years ago. In 2008 both Apple and Google released content platforms that allowed developers to get apps directly into the hands of smart phone owners with very little investment and few intermediaries. By 2014 (give or take a year, depending on whose data you look at) mobile apps had surpassed desktops as the path people take to the internet. It is impossible to ignore the importance of the platforms that make mobile apps possible, or the importance of the paradigm shift those apps have brought about in software engineering. 

Mobile apps tend to be small and focused on doing one thing well – after all, a consumer has to quickly understand what the app does. By and large, mobile apps do not communicate with each other, and when they do it is through a disciplined exchange mediated by the platform. Their relatively small size and isolation make it natural for each individual app to be owned by a single, relatively small team that accepts the responsibility for its success. As we saw earlier, Amazon moved to small autonomous teams a long time ago, but it took a significant architectural shift for those teams to be effective. Mobile apps provide a critical architectural shift that makes small independent teams practical, even in monolithic organizations. And they provide an ecosystem that allows small startups to compete effectively with those organizations.  

The nature of mobile apps changes the software development paradigm in other ways as well. As one bank manager told me, “We did our first mobile app as a project, so we thought that when the app was released, it was done. But every time there was an operating system update, we had to update the app. That was a surprise! There are so many phones to test and new features coming out that our apps are in a constant state of development. There is no such thing as maintenance – or maybe it's all maintenance.”

The small teams, constant updates, and direct access to the deployed app have created a new dynamic in the IT world: software engineers have an immediate connection with the results of their work. App teams can track usage, observe failures, and monitor metrics – then make changes accordingly. More than any other technology, mobile platforms have fostered the growth of small, independent product teams – with end-to-end responsibility – that use short feedback loops to constantly improve their offering.

Let’s return to luck. If you have a large innovation effort, it probably has a 20% chance of success at best. If you have five small, separate innovation efforts, each with a 20% chance of success, you have a much better chance that at least one of them will succeed – as long as they are truly autonomous and are not tied to an inflexible back end or flawed use case. Mobile apps create an environment where it can be both practical and advisable to break products into small, independent experiments, each owned by its own “full stack” team.[2] The more of these teams you have pursuing interesting ideas, the more likely it is that some of those ideas will become the innovative offerings that propel your company into the future.
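
The arithmetic behind that claim, assuming the five efforts are truly independent:

    # Chance that at least one of five independent efforts succeeds,
    # if each has a 20% chance of success on its own.
    p_single = 0.20
    p_at_least_one = 1 - (1 - p_single) ** 5
    print(round(p_at_least_one, 2))   # about 0.67, versus 0.20 for a single big effort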


What about “Agile”?

You might notice that “Agile” is not on my list of innovations. And yet, agile values are found in every major software innovation since the Agile Manifesto was articulated in 2001. Agile development does not cause innovation; it is meant to create the conditions necessary for innovation: flexibility and discipline, customer understanding and rapid feedback, small teams with end-to-end responsibility. Agile processes do not manufacture insight and they do not create luck. That is what people do.  
____________________________
Footnotes:
1.    “the problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks… Such environments eventually experience massive blowups… catching everyone off guard and undoing years of stability or, in almost all cases, ending up far worse than they were in their initial volatile state. Indeed, the longer it takes for the blowup to occur, the worse the resulting harm…” Nassim Taleb, Antifragile, p. 106

2.   A full stack team contains all the people necessary to make things happen in not only the full technology stack, but also in the full stack of business capabilities necessary for the team to be successful.