

Basecamp Shape Up Product Development Summary

Ryan Singer has documented Basecamp's product development and delivery methodology in the ebook Shape Up: Stop Running in Circles and Ship Work that Matters. Shape Up describes Basecamp's process of taking raw ideas and working through a shaping process: narrow in on a core problem, remove unknowns, risks and deep rabbit holes, add project boundaries, prefer appetite over estimates, and write a pitch. Stakeholders then bet on the work for a six week build cycle and hand it over to a small, empowered build team. The team discovers the work by doing it, organises it into scopes of work, communicates progress through Hill Charts, hammers scope down, and works in small vertical slices with a continuous delivery mindset, attacking the most unknown / riskiest work early in the six week product development cycle.

Key Messages:

  • Use a shaping process to define & validate a problem, to address any unknowns or risks
  • Focus on appetite instead of estimates
  • Prefer bets, not backlogs
  • Bet on a six week cycle with a circuit breaker at the end
  • Small empowered teams owning cycle outcomes
  • Deliver small vertical slices of the problem space
  • Build has two distinct phases – discover and validate (uphill), and known / execute (downhill)
  • Scope will grow as the delivery team learns more about the space; continuously hammer scope down to deliver on the six week commitment / cycle.

Please see below my notes / snippets (copied from the Shape Up ebook without permission); you can download a free copy of Shape Up from Basecamp: https://basecamp.com/shapeup/shape-up.pdf.

Notes

Six week cycles

  • Strict time box, acts as a circuit breaker, by default no extension.

Shaping the work

  • A senior group works in parallel to the cycle team – focused on appetite (how much time do we want to spend)

Team fully responsible

  • Making hard calls to finish the project (cycle) on time

Targeting risk

  • The risk of not shipping on time. Solving open questions before we commit to a cycle. Build vertical deliverables, integrate early & continuously, sequence most unknown work first.

Part 1 Shaping

Wireframes are too concrete (they give designers no room for creativity – the design is anchored)

Words are too abstract (a problem described with no context makes it hard to understand scope & make tradeoffs)

Good Shaping:

  • It's rough
  • It's solved
  • It's bounded

Shaping is kept private (closed door) until commitment to a cycle is made

Two work tracks (cycles):

  • One for shaping
  • One for building

6 week cycles, shaping leads into building:

Cycle 1: Shaping 1
Cycle 2: Shaping 2 | Building 1
Cycle 3: Shaping 3 | Building 2
Cycle 4: Shaping 4 | Building 3
Cycle 5:           | Building 4

 

Appetite

  • Small batch (one designer, one or two programmers for one or two weeks)
  • Big batch (same team size, 6 weeks)

Fixed time, variable scope: An appetite is completely different from an estimate. Estimates start with a design and end with a number. Appetites start with a number and end with a design

Analyse the customer problem – when a customer asked for a calendar, we asked what she was doing when the thought occurred to her to ask for it.

Breadboarding

Use words (places, affordances, connection lines) or fat-marker visuals to shape at the right level of abstraction.

Iterate on original idea

Fat marker visuals

  • Avoid getting into a high level of fidelity, or into the weeds

Do stress-testing and de-risking (find the deep holes and challenges which could hinder delivery)

The Pitch

Prefer pitches to be asynchronous communication – i.e. give people time to review offline in their own time; only escalate to real-time when necessary (i.e. a meeting with key stakeholders) and give notice in advance

People review the pitch and add comments (i.e. to poke holes / ask questions – not to say no to the pitch; that's for the betting table)

Part 2 Betting

Bets, not backlogs – big backlogs are a big weight we don't need to carry, and big time wasters – constantly reviewing, grooming and organising. Each 6 week cycle, a betting table is held where stakeholders decide what to do in the next cycle, choosing from a few well shaped, risk-reduced options; the pitches are potential bets.

  • If a pitch was great, but the time wasn’t right (there is no backlog), individuals can track the pitch independently and lobby for it again six weeks later

It's easy to overvalue ideas – in truth, ideas are cheap – don't stockpile or backlog them. Really important ideas will come back to you.

6 Week Cycle

Cool Down

After every 6 week cycle, we schedule two weeks for cool down. This gives leaders enough time to breathe, meet and consider what to do next, while programmers and designers are free to work on whatever they want (i.e. fix bugs, explore new ideas, try out new technical possibilities).

Project teams consist of one designer & two programmers or one designer & one programmer (normally). A team spending an entire 6 week cycle is called the big batch team, and the team working on small projects (1-2 weeks) is called the small batch team.

The output of the betting meeting is called a cycle plan.

The cycle plan is a bet with a potential payout at the end of the cycle.

Cycles are dedicated commitments – uninterrupted time for the team to focus. The team can't be pulled away to work on something else. When you make a bet, you honour it.

  • "When you pull someone away for one day to fix a bug or help a different team, you don't just lose a day. You lose the momentum they built up and the time it will take to gain it back. Losing the wrong hour can kill a day. Losing a day can kill a week."

We only plan one cycle ahead, and can always introduce critical work in the next cycle. If it's a real crisis, we can always hit the brakes – but true crises are very rare.

Having a fixed 6 week cycle without any potential for extra time acts as a circuit breaker, preventing runaway projects and projects which overload the system. If a project doesn't finish in the six weeks, it normally means a problem occurred in the shaping phase – perhaps it's time to reframe the problem. "A hard deadline and the chance of not shipping motivates the team to regularly question how their designs and implementation decisions are affecting scope"

What about bugs? Unless it's a P1/P2 (i.e. a crisis), bugs don't naturally get priority over existing planned work; they can wait. This is how we address bugs:

  1. Use cool-down period
  2. Bring it to the betting table
  3. Schedule a bug smash (once a year, usually around holidays – a whole dedicated cycle to fixing bugs)

For projects larger than a 6 week cycle, we shape them (break them down) into 6 week cycles and only bet 6 weeks at a time.

Place Your Bets

  • Depending on whether we're improving an existing product or building a new product, we're going to set different expectations about what happens during the six week cycle.
    • Existing Products – Shape the work, bet on it, build.
    • New Products – broken into three phases:
      • 1. R&D mode: Learn what we want by building it (time boxed spikes, learn by doing), no expectation of shipping anything.
      • 2. Production mode: Shape the work, bet on it, build. Shipping is the goal (merging to main codebase), however not necessarily to end customers yet so we maintain the option to remove features from the final cut before launch
      • 3. Cleanup mode: A free for all, reserved capacity to finish things, or address things forgotten, bugs etc, no shaping, no clear team boundaries with work shipped continuously in as small bites as possible. Leadership make “final cut” decisions with cleanup not lasting longer than two cycles.

Example betting table questions & debates:

  • Does the problem matter?
  • Weighing up problems (options) against each other
  • Can we narrow down the problem (Pareto – 80% of the benefit from 20% of the change)?
  • Is the appetite right (do we want to spend $xxx / weeks / cycles on this problem)?
  • Is the solution attractive?
  • Is it the right time?
  • Are the right people available?

After the betting table has finished, a kick-off message is posted on which projects we’re betting for the next cycle and who will be working on them

Part 3 Building

Assign projects, not tasks. Nobody plays the role of “taskmaster” or the “architect” who splits the project up into pieces for other people to execute.

The team defines their own tasks and works within the boundaries of the pitch.

The team has full autonomy and can use their judgement to execute.

Done means deployed. All QA needs to happen within the cycle.

A Basecamp project is created, along with a chat channel and a kickoff call.

The first two to three days are radio silence from the team as they dive deep into the details and get acquainted with the problem.

The team starts off with imagined tasks and, through discovery, learns the real tasks to complete. Teams discover by doing the real work.

Integrate one slice

Pick a small slice of the project (i.e. design, backend & frontend coding) to deliver end to end, to show progress and gain feedback

Start in the middle

Start at the core of the problem (i.e. the core screen and adding data to a database) and stub everything else out, rather than at the entry point (i.e. logging in). When choosing what to build first:

  • It should be core
  • It should be small
  • It should be novel (things you’ve never done before, address risk / uncertainty)

Organise by structure, not by person

Allow teams to self organise around a problem: understand the scope, form a mental image, and break the work down into parts that are no longer than 1-2 days of effort – a series of mini scopes.

Scopes become the natural language of the project at the macro level. It will help communicate status and progress.

Scoping happens over time as the team learns (not necessarily all up front); You need to walk the territory before you can draw the map. Scopes need to be discovered by doing the real work; identifying imagined vs discovered tasks and seeing how things connect (and don’t connect).

How do you know if you have scoped right?

Usually takes a few weeks to get a clear understanding of scope

A typical software project is split into cake layers (frontend & backend work, in thin slices). Watch out for icebergs, which have a lot more backend or a lot more frontend work; look to simplify, reduce the problem and/or split these into separate projects.

There will always be things that don't fit into specific scope buckets; we refer to these tasks as chowder.

Mark nice-to-have tasks with a leading ~ to separate them from the must-haves.
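A tiny sketch of that convention in practice (the task names are invented for illustration):

```python
# Split a task list into must-haves and nice-to-haves using the leading "~" convention.
tasks = [
    "build invite form",
    "~animate the confirmation",
    "wire up email delivery",
]

must_haves = [t for t in tasks if not t.startswith("~")]
nice_to_haves = [t.lstrip("~") for t in tasks if t.startswith("~")]

print(must_haves)     # ['build invite form', 'wire up email delivery']
print(nice_to_haves)  # ['animate the confirmation']
```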

Show Progress

We have to be cautious of big up front plans – imagined tasks (in theory) vs. real tasks (in practice).

As the project progresses, the to-do lists actually grow as the team makes progress (making it very hard to report progress against an imagined up front plan).

The problem with estimates is they don’t show uncertainty (or confidence level).

  • If you have two tasks, both estimated to take four hours:
    • The team has done task 1 many times, so you can be confident in the estimate
    • The team has never done task 2, or it has unclear interdependencies (lots of unknowns), so the estimate is uncertain and low confidence

We came up with a way to see the status of a project without counting tasks and without numerical estimates – by shifting the focus from what’s done or not done to what’s unknown and what’s solved. We use the metaphor of the hill.

The uphill phase is full of uncertainty, unknowns and problem solving (i.e. discovery). The downhill phase is marked by certainty, confidence, seeing everything and knowing what to do.

We can combine the hill metaphor with the scopes to plot each one as a different colour on the hill.
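As a rough illustration (a sketch, not Basecamp's actual tooling – the scope names and progress figures are invented), each scope's position can be mapped onto a bell-shaped hill, with the first half representing uphill problem solving and the second half downhill execution:

```python
# A minimal Hill Chart sketch: plot each scope as a dot on a bell-shaped hill.
import numpy as np
import matplotlib.pyplot as plt

# progress: 0-100, where <50 is uphill (unknowns) and >50 is downhill (execution)
scopes = {"Invites": 20, "Notifications": 55, "Billing": 85}

x = np.linspace(0, 100, 200)
plt.plot(x, np.sin(np.pi * x / 100), color="grey")  # the hill, peaking at 50

for name, progress in scopes.items():
    y = np.sin(np.pi * progress / 100)  # position of this scope on the hill
    plt.scatter(progress, y)
    plt.annotate(name, (progress, y))

plt.xticks([0, 50, 100], ["Figuring it out", "Peak", "Made it happen"])
plt.yticks([])
plt.title("Hill Chart: what's unknown vs. what's solved")
plt.show()
```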

A dot that doesn't move over time is a red flag: someone might be stuck and need help (the Hill Chart identifies this without someone needing to say "I don't know / I need help"). It changes the language and enables managers to help by asking "what can we solve to get that over the hill?". A non-moving dot can also indicate work is progressing well but scope is significantly increasing with discovery; the team can break the scope apart into smaller scopes or redefine / reshape the problem.

Sometimes tasks backslide, which often happens when someone did the uphill work (i.e. discovery) with their head (i.e. imagined) instead of their hands (practice). Uphill can be broken into three phases:

  1. "I've thought about this"
  2. "I've validated my approach"
  3. "I'm far enough with what I've built that I don't believe there are other unknowns"

Teams should attack the scariest / riskiest scope first within the cycle (give more time to unknown tasks and less time to known tasks).

Journalists have a concept called the inverted pyramid: their articles start with the most essential information at the top and add details and background information in decreasing order of importance. Teams should plan their work this way too.

Deciding when to stop

There’s always more work than time. Shipping on time means shipping something that’s imperfect.

Pride in work is important for quality and morale, but we need to direct it at the right target. If we aim for perfect design we'll never get there; at the same time we don't want to lower our standards.

Instead of comparing up to an ideal, compare down to a baseline – seeing the work as better than what customers have today – "it's not perfect, but it works and is an improvement".

Limits motivate trade-offs: a hard six week circuit breaker forces teams to make trade-offs.

Cutting scope isn't lowering quality. Making choices makes the product better; it differentiates the product (better at some things).

Scope Hammering

Continually question scope and hammer it down – separating must-haves from nice-to-haves – so the work still ships within the fixed six week cycle.

Quality Assurance

Basecamp (serving millions of customers) has one QA person. The designers and programmers take responsibility for the basic quality of their work, and the QA person comes in towards the end of the cycle and hunts for edge cases outside the core functionality. Programmers write their own tests and the team works together to ensure the project does what it should according to what was shaped.

We think of QA as a level up, not as a gate or check point.

The team can ship without waiting for a code review; there's no formal checkpoint. But code review makes things better, so if there's time, it makes sense.

When to extend a project

In rare cases we’ll allow a project to run past its deadline / cycle and use the following criteria:

  • The outstanding tasks must be “must haves”
  • The outstanding tasks must be all “down hill” – no unsolved problems, no open questions.

The cool down period can be used to finish a project; however, the team needs to be disciplined and this shouldn't become a habit (it points to a problem with shaping or team performance).

Move On

Shipping and going live can generate new work – through customer feedback, defects and new feature requests.

Treat customer feedback as new raw ideas which need to go through the shaping process.

  • Use a gentle no (push back) with customers until ideas are shaped and the problem verified. Saying yes to customer requests can take away your freedom in the future (like taking on debt).

Feedback needs to be shaped.

Summary

As Basecamp has scaled to 50 people, we've been able to specialise roles:

  • Product team (12)
  • Specialised roles of designers and programmers
  • Dedicated Security, Infrastructure & Performance (SIP) handles technical work, lower in stack and more structural
  • Ops team (keep the lights on)
  • Support team

With this structure, we don’t need to interrupt the designers and programmers on our core product team working on shaped projects within cycles.




Software Delivery Estimate Guideline

I have used slightly differing versions of the below to outline what should be included in an estimate. Please consider that each business environment, team and delivery process has a differing context (i.e. know your context – KYC). For an overview of different estimation methods and templates, please see my Software Development & Delivery Estimation article. The phase of the initiative / project (pre-discovery, discovery, delivery), the delivery methodology and the type of work at hand (i.e. small agile feature, initiative, or large project) will determine which of the below estimate methods to apply.

Generally there are two types of estimates:

  • High-Level – i.e. very early on, not much context, quick sizing based on little information
  • Detailed – i.e. close to delivery, team involved, more detail, agile point based estimation

Our delivery estimates consider a person day to be 8 hours. However, when scheduling work (assuming we're working outside a flow based agile delivery model), we should plan on a person day of 6-7 hours (not 8), which leaves room for non-delivery work such as meetings, lunch, learning, training, discovery, estimation, production support, backlog grooming, story breakdown sessions, unplanned leave etc. If working in an agile method, velocity based planning will naturally take care of these items.
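As a quick sketch of the arithmetic (the 10 day estimate and the 6.5 hour effective day below are illustrative assumptions):

```python
# Convert an estimate expressed in 8-hour person days into schedule days
# at ~6.5 effective delivery hours per day.
ESTIMATE_HOURS_PER_DAY = 8     # how the estimate is expressed
EFFECTIVE_HOURS_PER_DAY = 6.5  # what's realistically available for delivery work

estimate_days = 10
schedule_days = estimate_days * ESTIMATE_HOURS_PER_DAY / EFFECTIVE_HOURS_PER_DAY
print(round(schedule_days, 1))  # ~12.3 working days on the schedule
```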

What should be included in an estimate

  • Analysis effort
  • Technical design effort
  • Development effort
  • Unit testing
  • Manual testing of solution
  • Test automation (including functional, integration/API, E2E and smoke tests)
  • Refactoring tasks (if possible / agreed)
  • Story kickoffs, code/peer reviews and walkthroughs
  • Build pipelines, environment setup & configuration and deployment infrastructure
  • Defect fixing in system testing, end to end testing & user acceptance testing phases
  • Deployments through lower environments to production
  • Feature toggling / rollout strategy in production and support
  • Monitoring & alerting tasks
  • Production defect fixing
  • Documentation
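To make the roll-up concrete, a minimal sketch of summing the included activities into a feature estimate (activity names shortened, numbers entirely hypothetical, in person days):

```python
# Roll per-activity effort (person days) up into a single feature estimate.
feature_estimate = {
    "analysis": 1.0,
    "technical design": 1.0,
    "development": 5.0,
    "unit testing": 1.0,
    "manual testing": 1.0,
    "test automation": 2.0,
    "kickoffs, code reviews & walkthroughs": 0.5,
    "pipelines, environments & deployment infra": 1.0,
    "defect fixing (SIT / E2E / UAT)": 1.5,
    "deployments & rollout strategy": 0.5,
    "monitoring & alerting": 0.5,
    "documentation": 0.5,
}

total = sum(feature_estimate.values())
print(f"Feature estimate: {total} person days")  # 15.5 person days
```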

What should not be included in an estimate

  • Buffer, fat or contingency (this will be added at an initiative / project level when putting together estimates to share and in delivery plan). We want to avoid layers of buffer/contingency at task, feature, initiative and program level etc.
  • Spikes / prototypes / proof of concepts – these should be estimated / played as a separate time-boxed task (ideally to inform your estimation)
  • Formal UAT Support – scheduled as a separate task in the delivery plan on large projects
  • Formal Warranty period / Hyper-care – scheduled as a separate task in the delivery plan on large projects
  • Customer meetings
  • General development activities outside this feature
  • Learning time, training, guilds, conferences etc



Lean Enterprise – How High Performance Organisations Innovate at Scale Notes

Jez Humble, Joanne Molesky & Barry O'Reilly have teamed up to deliver an excellent book on applying lean and agile practices to enterprise business. The book focuses on how to maximise product discovery, product development and validated learning through experimentation, prioritisation through cost of delay, lean governance principles and modern funding practices, to maximise value delivered in the shortest time. The book provides case studies on how traditional enterprise practices such as architecture, project management office (PMO), change management, security and operations can apply similar lean product development methods to maximise value creation. It also provides an overview of modern software practices such as continuous delivery, test automation, experimentation and flow based value creation.

Please see below my book notes / snippets (copied from my Kindle highlights with some minor edits, without permission). I highly recommend buying the book & keeping it as your lean, agile bible on how to continuously learn and get things done.



On Running a Lean Business

Part I. Orient

The business world is moving from treating IT as a utility that improves internal operations to using rapid software- and technology-powered innovation cycles as a competitive advantage. As Jack Welch put it: "Shareholder value is the dumbest idea in the world…[it is] a result, not a strategy…Your main constituencies are your employees, your customers, and your products." Research has shown that focusing only on maximising profits has the paradoxical effect of reducing the rate of return on investment. Rather, organisations succeed in the long term through developing their capacity to innovate and adopting Welch's strategy: focusing on employees, customers, and products.

The Toyota Production System (TPS) makes building quality into products the highest priority: a problem must be fixed as soon as possible after it's discovered, and the system must then be improved to try and prevent it from happening again. TPS introduced the famous andon cord process. The heart of the TPS is creating a high-trust culture in which everybody is aligned in their goal of building a high-quality product on demand, and where workers and managers collaborate across functions to constantly improve — and sometimes radically redesign — the system. These ideas from the TPS — a high-trust culture focused on continuous improvement (kaizen), powered by alignment and autonomy at all levels — are essential to building a large organisation that can adapt rapidly to changing conditions. The TPS requires workers to pursue mastery through continuous improvement, imbues them with a higher purpose — the pursuit of ever-higher levels of quality, value, and customer service — and provides a level of autonomy by empowering them to experiment with improvement ideas and to implement those that are successful.

Giving people pride in their work rather than trying to motivate them with carrots and sticks is an essential element of a high-performance culture.

The TPS does away with the concept of seniority in which union workers are assigned jobs based on how many years of service they have, with the best jobs going to the most senior. Under the TPS, everybody has to learn all the jobs required of their team and rotate through them. Toyota has always been very open about what it is doing, giving public tours of its plants, even to competitors — partly because it knows that what makes the TPS work is not so much any particular practices but the culture. Many people focus on the practices and tools popularised by the TPS, such as the andon cords. One GM vice president even ordered one of his managers to take pictures of every inch of the NUMMI plant so they could copy it precisely. The result was a factory with andon cords but with nobody pulling them because managers (following the principle of extrinsic motivation) were incentivised by the rate at which automobiles — of any quality — came off the line.

The key to understanding a lean enterprise is that it is primarily a human system.

  • Pathological organisations are characterised by large amounts of fear and threat. People often hoard information or withhold it for political reasons, or distort it to make themselves look better.
  • Bureaucratic organisations protect departments. Those in the department want to maintain their “turf,” insist on their own rules, and generally do things by the book — their book.
  • Generative organisations focus on the mission. How do we accomplish our goal? Everything is subordinated to good performance, to doing what we are supposed to do.

Figure 1. How organisations process information

Analysis showed that firms with high-performing IT organisations were twice as likely to exceed their profitability, market share, and productivity goals.

The survey also set out to examine the cultural factors that influenced organisational performance. The most important of these turned out to be whether people were satisfied with their jobs, based on the extent to which they agreed with the following statements (which are strongly reminiscent of the reaction of the NUMMI workers who were introduced to the Toyota Production System):

  • I would recommend this organisation as a good place to work.
  • I have the tools and resources to do my job well.
  • I am satisfied with my job.
  • My job makes good use of my skills and abilities.

Statistical analysis of the results showed that team culture was not only strongly correlated with organisational performance, it was also a strong predictor of job satisfaction. The results are clear: a high-trust, generative culture is not only important for creating a safe working environment — it is the foundation of creating a high-performance organisation.

Mission Command as an alternative to Command & Control

Command and control – the idea from scientific management that the people in charge make the plans and the people on the ground execute them – is an outdated model, highlighted by the Prussian Army's defeat by Napoleon in 1806. Scharnhorst noted that Napoleon's officers had the authority to make decisions as the situation on the ground changed, without waiting for approval through the chain of command. This allowed them to adapt rapidly to changing circumstances. "No plan survives contact with the enemy"; the advice instead: "The higher the level of command, the shorter and more general the orders should be". Crucially, orders always include a passage which describes their intent, communicating the purpose of the orders. This allows subordinates to make good decisions in the face of emerging opportunities or obstacles which prevent them from following the original orders exactly (an approach called Auftragstaktik, or Mission Command).

Friction and Complex Adaptive Systems

Clausewitz's concept of friction is an excellent metaphor for understanding the behaviour of complex adaptive systems such as an enterprise (or indeed any human organisation). Bungay argues that friction creates three gaps:

  • First, a knowledge gap arises when we engage in planning or acting, due to the necessarily imperfect state of the information we have to hand and our need to make assumptions and interpret that information.
  • Second, an alignment gap is the result of people failing to do things as planned, perhaps due to conflicting priorities, misunderstandings, or simply someone forgetting or ignoring some element of the plan.
  • Finally, there is an effects gap due to unpredictable changes in the environment, perhaps caused by other actors, or unexpected side effects producing outcomes that differ from those we anticipated.

Figure 2. Friction creates three gaps, and how to manage them

 

This principle is applied in multiple contexts:

Budgeting and financial management

  • Instead of a traditional budgeting process which requires all spending for the next year to be planned and locked down based on detailed projections and business plans, we set out high-level objectives across multiple perspectives such as people, organisation, operations, market, and finance that are reviewed regularly. This kind of exercise can be used at multiple levels, with resources allocated dynamically when needed and the indicators reviewed on a regular basis.

Program management

  • Instead of creating detailed, upfront plans on the work to be done and then breaking that work down into tiny little bits distributed to individual teams, we specify at the program level only the measurable objectives for each iteration. The teams then work out how to achieve those objectives, including collaborating with other teams and continuously integrating and testing their work to ensure they will meet the program-level objectives.

Process improvement

  • Working to continuously improve processes is a key element of the TPS and a powerful tool to transform organisations. We present the Improvement Kata in which we work in iterations, specifying target objectives for processes and providing the people who operate the processes the time and resources to run experiments they need to meet the target objectives for the next iteration.

Crucially, these mission-based processes must replace the command and control processes, not run alongside them.

The long-term value of an enterprise is not captured by the value of its products and intellectual property but rather by its ability to continuously increase the value it provides to customers — and to create new customers — through innovation.

Figure 3. Technology adoption lifecycle, from Dealing with Darwin by Geoffrey A. Moore, 2006

For this vision to become reality, there are two key assumptions that must be tested: the value hypothesis and the growth hypothesis.

We then design an experiment, called the minimum viable product, which we build in order to gather the necessary data from real customers to determine if we have a product/market fit. If our hypothesis is incorrect, we pivot, coming up with a new value hypothesis based on what we learned and following the steps above. Every iteration will result in validated learning.

What is an Option?

Purchasing an option gives us the right, but not the obligation, to do something in the future (typically to buy or sell an asset at a fixed price). Options have a price and an expiry date. Investing a fixed amount of time and money to investigate the economic parameters of an idea — be it a business model, product, or an innovation such as a process change — is an example of using optionality to manage the uncertainties of the decision to invest further.

Optionality is a powerful concept that lets you defer decisions on how to achieve a desired outcome by exploring multiple possible approaches simultaneously.

“When we decided to do a microprocessor, in hindsight, I think I made two great decisions. I trusted the team, and gave them two things that Intel and Motorola had never given their people: the first was no money and the second was no people. They had to keep it simple.”

Whenever you hear of a new IT project starting up with a large budget, teams of tens or hundreds of people, and a timeline of many months before something actually gets shipped, you can expect the project will go over time and budget and not deliver the expected value.

Sadly, however, whether the project "succeeds" according to these criteria is irrelevant and insignificant when compared to whether we actually created value for customers and for our organisation. Data gathered from evolving web-based systems reveals that the plan-based approach to feature development is very poor at creating value for customers and the organisation. Amazon's and Microsoft's research reveals the "humbling statistics": 60%–90% of ideas do not improve the metrics they were intended to improve. Based on experiments at Microsoft:

  • 1/3 of ideas created a statistically significant positive change,
  • 1/3 produced no statistically significant difference, and
  • 1/3 created a statistically significant negative change.

Due to a cognitive bias known as the planning fallacy, executives tend to “make decisions based on delusional optimism rather than on a rational weighing of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”

Finally, because the project approach judges people according to whether work is completed on time and on budget, not based on the value delivered to customers, productivity gets measured based on output rather than outcomes.

We create an unsustainable “hero culture” that rewards overwork and high utilisation (making sure everybody is busy) rather than doing the least possible work to achieve the desired outcomes.

We describe how to run large-scale programs of work using the following principles:

  1. Define, measure, and manage outcomes rather than output. Applying the Principle of Mission, we specify “true north” for our program of work — our ideal stakeholder outcomes. Then, at the program level, we work iteratively, specifying for each iteration the measurable program-level outcomes we want to achieve. How to achieve these outcomes is delegated to teams working within the program. Based on the feedback from real customers after each iteration, we work to improve quality of demand, improve speed, and improve quality of outcomes.
  2. Manage for throughput rather than capacity or utilisation. We implement Kanban principles by making all work visible and limiting work in process. We then aim to stop starting work and start finishing it as soon as possible. We continuously undertake process improvement work to reduce lead time — the time it takes to deliver work — throughout the system. We use continuous delivery and work in small increments to make it cheap and low risk to deliver work in small batches with easier feedback loops.
  3. Ensure people are rewarded for favouring a long-view system-level perspective over pursuing short-term functional goals. People should be rewarded for continuous and effective (win-win) collaboration, for minimising the amount of work required to achieve the desired outcomes, and for reducing the complexity of the systems we create to enable these outcomes. People should not be punished when failures occur; rather, we must build a culture of experimentation and collaboration, design systems which make it safe to fail, and put in place processes so we can learn from our mistakes and use this information to make our systems more resilient.

Balancing the Enterprise Portfolio

Figure 4. The 3 product or business horizons

The problems occur when the acquired company — working on a horizon 3 or 2 product — is subjected to the horizon 1 governance, financial targets, and management structures of the acquiring enterprise, completely destroying its ability to innovate.

Our hypothesis is that organisations survive and grow in the medium and long term by balancing the ability to continuously explore potential new business models with effective exploitation of existing ones.

Intuit uses a simple model to balance horizons 1, 2, and 3; Google follows a similar model, but with different allocations:

  • 70% to Horizon 1
  • 20% to Horizon 2
  • 10% to Horizon 3

Part II. Explore

When faced with a new opportunity or a problem to be solved, our human instinct is to jump straight to a solution without adequately exploring the problem space, testing the assumptions inherent in the proposed solution, or challenging ourselves to validate the solution with real users.

Our mission would be to prevent anybody from commencing a major program to solve the problem or pursue the opportunity until they do the following:

  • Define the measurable business outcome to be achieved
  • Build the smallest possible prototype capable of demonstrating measurable progress towards that outcome
  • Demonstrate that the proposed solution actually provides value to the audience it is designed for

“Even in projects with very uncertain development costs, we haven’t found that those costs have a significant information value for the investment decision. The single most important unknown is whether the project will be canceled…The next most important variable is utilisation of the system, including how quickly the system rolls out and whether some people will use it at all.” Thus the business case essentially becomes a science fiction novel based in a universe that is poorly understood — or which may not even exist! Meanwhile significant time is wasted on detailed planning, analysis, and estimation, which provides large amounts of information with extremely limited value.

There are two factors we care about in a business plan. The first is the sensitivity of the key metric to the various variables in the business case. The second is the level of uncertainty in the variables to which the key metric is sensitive. Given distributions and ranges for the key variables, a simple but powerful approach is to perform a Monte Carlo simulation to work out the possible outcomes.
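A minimal sketch of such a simulation (the variables, ranges and profit model below are invented purely for illustration):

```python
# Monte Carlo simulation over the uncertain variables of a business case.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Express each uncertain variable as a range/distribution, not a point value.
adoption_rate = rng.uniform(0.05, 0.20, n)     # fraction of users who adopt
users_reached = rng.normal(50_000, 10_000, n)  # rollout reach
revenue_per_user = rng.uniform(20, 60, n)      # annual $ per adopting user
build_cost = rng.uniform(100_000, 250_000, n)  # cost to deliver

profit = adoption_rate * users_reached * revenue_per_user - build_cost

print(f"P(loss) = {np.mean(profit < 0):.0%}")
print("P10 / P50 / P90 profit:", np.percentile(profit, [10, 50, 90]).round())
```

The output is a distribution of outcomes (and a probability of loss) rather than the single point value a traditional business case would show.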

We should stop using the word “requirements” in product development, at least in the context of nontrivial features. What we have, rather, are hypotheses. We believe that a particular business model, or product, or feature, will prove valuable to customers. But we must test our assumptions. We can take a scientific approach to testing these assumptions by running experiments.

In Running Lean (O’Reilly), Ash Maurya explains how to execute a Lean Startup model:

  • Do not spend a lot of time creating a sophisticated business model. Instead, design a simplified business model canvas which captures and communicates the key operating assumptions of your proposed business model.
  • Gather information to determine if you have a problem worth solving — meaning that it is both solvable and people will pay for it to be solved. If both of these conditions obtain, we have achieved a problem/solution fit.
  • Then, design a minimum viable product (MVP) — an experiment designed to maximize learning from potential early adopters with minimum effort. In the likely case that the results of the MVP invalidate your product hypothesis, pivot and start again. Continue this process until you decide to abandon the initial problem, run out of resources, or discover a product/market fit. In the latter case, exit the explore phase and proceed to exploit the validated model.
  • Throughout this process, update the business model canvas based on what you learn from talking to customers and testing MVPs.

The purpose of measurement is not to gain certainty but to reduce uncertainty. The job of an experiment is to gather observations that quantitatively reduce uncertainty. The key principle to bear in mind is this: when the level of uncertainty of some variable is high, we need very little information to significantly reduce that uncertainty.

Definition of Measurement

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.

This definition may seem counterintuitive unless you have experience running experiments in a scientific context. In experimental science, the result of a measurement is never simply a single value. It is, rather, a probability distribution which represents the range of possible values. Any measurement that doesn’t indicate the precision of the result is considered practically meaningless. For example, a measurement of my position with a precision of 1 meter is far more valuable than that same position with a precision of 500 miles. The point of investing in measurement in a scientific context is to reduce our uncertainty about the actual value of some quantity. Thus, in particular, if we express our estimates as precise numbers (as opposed to ranges), we are setting ourselves up for failure: the chance of us meeting a date 6 months in the future precisely to the day is practically zero.

Game theory actually provides a formula for the expected value of information (EVI). Hubbard defines the value of information as follows: “Roughly put, the value of information equals the chance of being wrong times the cost of being wrong.”

The cost of being wrong — that is, what is lost if your decision doesn’t work out — is called an opportunity loss. When we multiply the opportunity loss by the chance of a loss, we get the expected opportunity loss (EOL). Calculating the value of information boils down to determining how much it will reduce EOL.
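A toy calculation of that formula (all numbers invented):

```python
# Expected opportunity loss (EOL) = chance of being wrong * cost of being wrong.
chance_wrong = 0.4    # probability the decision doesn't work out
cost_wrong = 500_000  # opportunity loss if it doesn't

eol_before = chance_wrong * cost_wrong  # 200,000

# Suppose a cheap experiment would cut our chance of being wrong to 10%.
eol_after = 0.10 * cost_wrong           # 50,000

# A measurement is worth at most the reduction in EOL it buys us.
value_of_information = eol_before - eol_after
print(value_of_information)             # 150,000
```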

Figure 4. The OODA loop

The OODA loop comes from Boyd's theory of maneuver warfare. OODA stands for observe, orient, decide, act: the four activities that comprise the loop.

Figure 5. Deming's Plan Do Check Act cycle

When everybody in the organisation has been trained to employ the scientific approach to innovation as part of their daily work, we will have created a generative culture.

Figure 6. Traditional project planning versus Lean Startup

Discovery is a rapid, time-boxed, interactive set of activities that integrates the practices and principles of design thinking and Lean Startup. “Design thinking takes a solution-focused approach to problem solving, working collaboratively to iterate an endless, shifting path towards perfection. It works towards product goals via specific ideation, prototyping, implementation, and learning steps to bring the appropriate solution to light.”

As Dan Pink argues in Drive, there are three key elements to consider when building an engaged and highly motivated team. First, success requires a shared sense of purpose in the entire team. The vision needs to be challenging enough for the group to have something to aspire to, but clear enough so that everyone can understand what they need to do. Second, people must be empowered by their leaders to work autonomously to achieve the team objectives. Finally, people need the space and opportunity to master their discipline, not just to learn how to achieve “good enough.”

Go Gamestorming

Gamestorming by David Gray et al. and the supporting Go Gamestorming Wiki, contain numerous games that encourage engagement and creativity while bringing structure and clarity to collaborative ideation, innovation, and improvement workshops.

Figure 7. Divergent Thinking

Divergent thinking is the ability to offer different, unique, or variant ideas adherent to one theme; convergent thinking is the ability to identify a potential solution for a given problem. We start exploration with divergent thinking exercises designed to generate multiple ideas for discussion and debate. We then use convergent thinking to identify a possible solution to the problem. From here, we are ready to formulate an experiment to test it.

 

Business Model Canvas

Figure 8. Business Model Canvas

The Business Model Canvas, shown in Figure 8, was created by Alex Osterwalder and Yves Pigneur along with 470 co-creators as a simple, visual business model design generator. It is a strategic management and entrepreneurial tool that enables teams to describe, design, challenge, invent, and pivot business models. The Business Model Canvas is freely available at http://www.businessmodelgeneration.com/canvas.

Beyond the template itself, Osterwalder also came up with four levels of strategic mastery of competing on business models to reflect the strategic intent of an organization:

  • Level 0 Strategy – The Oblivious: focus on product/value propositions alone rather than the value proposition and the business model.
  • Level 1 Strategy – The Beginners: use the Business Model Canvas as a checklist.
  • Level 2 Strategy – The Masters: outcompete others with a superior business model where all building blocks reinforce each other (e.g., Toyota, Walmart, Dell).
  • Level 3 Strategy – The Invincible: continuously self-disrupt while their business models are still successful (e.g., Apple, Amazon).

There are a number of canvases created by others that focus on product development:

  • The Lean Canvas: Makes the assumption that product/market fit is the riskiest hypothesis that must be tested.
  • The Opportunity Canvas: Focuses discussions about what we’re building and why, then helps you understand how satisfying those specific customers and users furthers the organisation’s overall strategy.
  • Value Proposition Canvas: Describes how our products and services create customer gains and how they create benefits our customers expect, desire, or would be interested in.

Minimal Viable Product Definition

Confusingly, people often refer to any validation activity anywhere along on this spectrum as an MVP, overloading the term and understanding of it in the organisation or wider industry. Marty Cagan, author of Inspired: How to Create Products Customers Love and ex-SVP for eBay, notably uses the term “MVP test” to refer to what Eric Ries calls an MVP. Cagan defines an MVP as “the smallest possible product that has three critical characteristics: people choose to use it or buy it; people can figure out how to use it; and we can deliver it when we need it with the resources available — also known as valuable, usable, and feasible,” to which we add “delightful,” since design and aesthetics are also as essential for an MVP as for a finished product.

Figure 9. Minimal Viable Product – Usable, Valuable, Feasible & Delightful

MVPs, as shown in Figure 10, do not guarantee success; they are designed to test the assumptions of a problem we wish to solve without over-investing. By far the most likely outcome is that we learn our assumptions were invalid and we need to pivot or stop our approach. Our ultimate goal is to minimize investment when exploring solutions until we are confident we have discovered the right product — then, exploit the opportunity by adding further complexity and value to build the product right.

Figure 10. An example set of types of lean MVPs

Paul Graham, "Do Things that Don't Scale": http://paulgraham.com/ds.html

Cagan defines vision as the shared understanding that “describes the types of services you intend to provide, and the types of customers you intend to serve, typically over a 2-5 year timeframe”

One Metric That Matters

One Metric That Matters (OMTM) is a single metric that we prioritize as the most important to drive decisions depending on the stage of our product lifecycle and our business model. It is not a single measure that we will use throughout our product lifetime: it will change over time depending on the problem area we wish to address. We focus on One Metric That Matters to:

  • Answer the most pressing question we have by linking it to the assumptions in the hypothesis we want to test
  • Create focus, conversation, and thought to identify problems and stimulate improvement
  • Provide transparency and a shared understanding across the team and wider organization
  • Support a culture of experimentation by basing it on rates or ratios, not averages or totals, relevant to our historical dataset

It should not be a lagging metric such as return on investment (ROI) or customer churn, both of which measure output after the fact. Lagging indicators become interesting later when we have achieved a product/market fit. By initially focusing on leading metrics, we can get an indication of what is likely to happen — and address a situation quicker to try and change the outcomes going forward.

The purpose of the OMTM is to gain objective evidence that the changes we are making to our product are having a measurable impact on the behavior of our customers. Ultimately we are seeking to understand:

  • Are we making progress (the what)?
  • What caused the change (the why)?
  • How do we improve (the how)?

Use A3 Thinking as a Systematic Method for Realizing Improvement Opportunities

  • A3 Thinking is composed of 7 elements embedding the Plan-Do-Check-Act cycle of experimentation:
    • Background
    • Current condition and problem statement
    • Goal statement
    • Root-cause analysis
    • Countermeasures
    • Check/confirmation effect
    • Followup actions and report

Other examples include the Five Ws and One H (Who, What, Where, When, Why, How).

Figure 11. Example of A3 Thinking on a page

 

Remember, metrics are meant to hurt — not to make us feel like we are winning. They must be actionable and trigger a change in our behavior or understanding. We need to consider these two key questions when deciding on what our OMTM will be:

What is the problem we are trying to solve?

  • Product development
  • Tool selection
  • Process improvement

What stage of the process are we at?

  • Problem validation
  • Solution validation
  • MVP validation

Eric Ries introduced the term innovation accounting to refer to the rigorous process of defining, experimenting, measuring, and communicating the true progress of innovation for new products, business models, or initiatives.

Figure 12. Profitability to Sales ratio for early stage innovations

 

Measurement Fallacy

Unfortunately, often what we tend to see collected and socialized in organizations are vanity metrics designed to make us feel good but offering no clear guidance on what action to take. In Lean Analytics, Alistair Croll and Benjamin Yoskovitz note, “If you have a piece of data on which you cannot act, it’s a vanity metric…A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?”

Figure 13. Examples of vanity versus actionable metrics

Vanity vs. Actionable metrics

  • Number of visits vs. Funnel metrics, cohort analysis
  • Time on site, number of pages vs. Number of sessions per user
  • Emails collected vs. Email action
  • Number of downloads vs. User activations
  • Tool usage vs. Tooling effect
  • Number of trained people vs. Higher throughput

“If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter. The problem is that, if anything, managers were simply measuring what seemed simplest to measure (i.e., just what they currently knew how to measure), not what mattered most.”

Figure 14. Pirate metrics: AARRR!

Figure 15. Sample innovation scorecard

In terms of governance, the most important thing to do is have a regular weekly or fortnightly meeting which includes the product and engineering leads within the team, along with some key stakeholders from outside the team (such as a leader in charge of the Horizon 3 portfolio and its senior product and engineering representatives).

In the early stages, we must spend less time worrying about growth and focus on significant customer interaction. We may go so far as to only acquire customers individually — too many customers too early can lead to a lack of focus and slow us down. We need to focus on finding passionate early adopters to continue to experiment and learn with. Then, we seek to engage similar customer segments to eventually “cross the chasm” to wider customer acquisition and adoption.

Our goal should be to create a pull system for customers that want our product, service, or tools, not to push a mandated, planned, and baked solution upon people that we must “sell” or require them to use.

Our runway should be a list of hypotheses to test, not a list of requirements to build. When we reward our teams for their ability to deliver requirements, it’s easy to rapidly bloat our products with unnecessary features — leading to increased complexity, higher maintenance costs, and limited ability to change. Features delivered are not a measure of success, business outcomes are.

Figure 16. User Story Map sample

Create a Story Map to Tell the Narrative of the Runway of Our Vision

Story maps are a tool developed by Jeff Patton, explained in his book User Story Mapping. As Patton states, “Your software has a backbone and a skeleton — and your map shows it.”

Our advice is this. There are two practices that should be adhered to from the beginning that will allow us to pay down technical debt later on: continuous integration and a small number of basic unit and user-journey tests.

Having forced ourselves to do something that should be unnatural to engineers — hack out embarrassingly crappy code and get out of the building to get validation from early on — we must then pull the lever hard in the other direction, kill the momentum, and transition our focus from building the right thing to building the thing right. Needless to say, this requires extreme discipline.

In The Lean Startup, Eric Ries argues that there are three key strategies for growth — choose one:

  • Viral
    • Includes any product that causes new customers to sign up as a necessary side effect of existing customers’ normal usage: Facebook, MySpace, AIM/ICQ, Hotmail, Paypal. Key metrics are acquisition and referral, combined into the now-famous viral coefficient.
  • Pay
    • Is when we use a fraction of the lifetime value of each customer and flow that back into paid acquisition through search engine marketing, banner ads, public relations, affiliates, etc. The spread between your customer lifetime value and blended customer acquisition cost determines either your profitability or your rate of growth, and a high valuation depends on balancing these two factors. Retention is the key goal in this model. Examples are Amazon and Netflix.
  • Sticky
    • Means something causes customers to become addicted to the product, and no matter how we acquire a new customer, we tend to keep them. The metric for sticky is the “churn rate” — the fraction of customers in any period who fail to remain engaged with our product or service. This can lead to exponential growth. For eBay, stickiness is the result of the incredible network effects of their business.

For enterprises, however, there are further growth options to consider:
  • Expand
    • Is building an adaptive initial business model that we could simply evolve and expand further by opening up new geographies, categories, and adjacencies. Amazon has executed this strategy excellently, moving from selling books to an e-commerce store offering new retail categories. With this growth strategy, the initial targeted market should be large enough to support multiple phases of growth over time.
  • Platform
    • Once we have a successful core product, we transform it into a platform around which an “ecosystem” of complementary products and services is developed by both internal and external providers. Microsoft did this with Windows by creating MS Office, Money, and other support packages, including those developed by external vendors. Other platform examples include Apple’s AppStore, Salesforce’s Force.com, and Amazon’s Marketplace and Web Services offerings.

Part III. Exploit

Figure 17. Water — Scrum — Fall

Figure 17 shows the typical “Water — Scrum — Fall” project-based paradigm, a phase-gate approach established post WW2 to run large military / aviation / space projects. In that world, no value was delivered until units were fully manufactured, and because of detailed up-front specifications very little change occurred in response to new information. None of these criteria apply to software based systems today.

We will present the following principles for lean-agile product development at scale:

  • Implement an iterative continuous improvement process at the leadership level with concise, clearly specified outcomes to create alignment at scale, following the Principle of Mission.
  • Work scientifically towards challenging goals, which will lead you to identifying and removing – or avoiding – non-value-add activity.
  • Use continuous delivery to reduce the risk of releases, decrease cycle time and make it economic to work in small batches.
  • Evolve an architecture that supports loosely coupled, customer facing teams which have autonomy in how they work to achieve the program-level outcomes.
  • Reduce batch sizes and take an experimental approach to the product development process.
  • Increase and amplify feedback loops to make smaller, more frequent decisions based on the information we learn from performing our work to maximise customer value.

Achieving high performance in organisations that treat software as a strategic advantage relies on alignment between the IT function and the rest of the organisation, along with the ability of IT to execute.

The researchers concluded that to achieve high performance, companies that rely on software should focus first and foremost on their ability to execute, build reliable systems, and work to continually reduce complexity. Only then will pursuing alignment with business priorities pay off.

They approached this by using activity accounting — allocating costs to the activities the team is performing.

Money spent on support is generally serving failure demand, as distinct from value demand, which was only driving 5% of the team’s costs.

With the Improvement Kata, everybody should be running experiments on a daily basis. Each day, people in the team answer the following five questions:

  1. What is the target condition?
  2. What is the actual condition now?
  3. What obstacles do you think are preventing you from reaching the target condition? Which one are you addressing now?
  4. What is your next step? (Start of PDCA cycle.) What do you expect?
  5. When can we go and see what we learned from taking that step?

As we continuously repeat the cycle, we reflect on the last step taken to introduce improvement. What did we expect? What actually happened? What did we learn? We might work on the same obstacle for several days.

Tom Gilb proposed in his 1988 work Principles of Software Engineering Management:


We must set measurable objectives for each next small delivery step. Even these are subject to constant modification as we learn about reality. It is simply not possible to set an ambitious set of multiple quality, resource, and functional objectives, and be sure of meeting them all as planned. We must be prepared for compromise and trade-off. We must then design (engineer) the immediate technical solution, build it, test it, deliver it — and get feedback. This feedback must be used to modify the immediate design (if necessary), modify the major architectural ideas (if necessary), and modify both the short-term and the long-term objectives (if necessary).


Even today, many people think that Lean is a management-led activity and that it’s about simply cutting costs. In reality, it requires investing to remove waste and reduce failure demand — it is a worker-led activity that, ultimately, can continuously drive down costs and improve quality and productivity.

It’s often hard to make the outcome of improvement work tangible — which is why it’s important to make it visible by activity accounting, including measuring the cycle time and the time spent serving failure demand such as rework.

Identify Value and Increase Flow

The actual number used to prioritise features is known as cost of delay divided by duration (or “CD3”). It is calculated as cost of delay for a feature divided by the amount of time we estimate it will take to develop and deliver that feature. This takes into account the fact that we have limited people and resources available to complete work, and that if a particular feature takes a long time to develop it will “push out” other features.
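A minimal sketch of CD3 prioritisation follows; the feature names, costs of delay and durations are made up for illustration:

```python
# CD3 = cost of delay / duration. With limited capacity, scheduling the highest
# CD3 first minimises the total cost of delay across the portfolio.
features = [
    # (name, cost of delay per week, estimated duration in weeks)
    ("checkout redesign", 20_000, 4),
    ("fraud rules",       15_000, 1),
    ("report exports",     4_000, 2),
]

ranked = sorted(features, key=lambda f: f[1] / f[2], reverse=True)
for name, cost_of_delay, weeks in ranked:
    print(f"{name:20s} CD3 = {cost_of_delay / weeks:>8,.0f} per week")
# "fraud rules" ranks first: its short duration amplifies its cost of delay.
```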

The best way to understand where problems start is by performing an activity called value stream mapping.

Sample Value Stream Map

Figure 18. Sample Value Stream Map

 

We can visualise the dynamics of the value stream by creating a cumulative flow diagram that shows the amount of work in each queue and process block over time. As shown in Figure 19, a cumulative flow diagram plots delivery progress across the phases/queues that work flows through (from Backlog to Validated Learning), making work in progress (WIP) and average lead time visible (a small Little’s Law sketch follows the figure).

Cumulative Flow Diagram

Figure 19. Cumulative Flow Diagram
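The quantities a cumulative flow diagram makes visible are linked by Little’s Law, which lets us derive average lead time from WIP and throughput. A small sketch with illustrative numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
# On a CFD, WIP is the vertical band between lines; throughput is the slope.
avg_wip = 24        # items in progress, averaged over the period
throughput = 8      # items completed per week

avg_lead_time = avg_wip / throughput   # 3.0 weeks from backlog entry to done
print(f"average lead time = {avg_lead_time:.1f} weeks")
```

This is also why limiting WIP shortens lead time: with throughput held constant, less work in progress means each item spends less time in the system.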

 

The Kanban Method offers a comprehensive way to manage the flow of work through the product development value stream by using the following practices:

  • Visualise workflow by creating a board showing the current work in process within the value stream in real time.
  • Limit work in process by setting WIP limits for each process block and queue within a value stream, and updating them in order to trade off lead time against utilisation (how busy people are).
  • Define classes of service for different types of work and the processes through which they will be managed, to ensure that urgent or time-sensitive work is prioritised appropriately.
  • Create a pull system by agreeing on how work will be accepted into each process block when capacity becomes available — perhaps by setting up a regular meeting where stakeholders decide what work should be prioritised based on available capacity.
  • Hold regular “operational reviews” for the stakeholders within each process block to analyse their performance and update WIP limits, classes of service, and the method through which work is accepted.
Kanban board

Figure 20. Sample Kanban board

Reducing lead times in this way requires that there be sufficient slack in the system to manage the WIP effectively.
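A toy sketch of the pull discipline; the process blocks, WIP limits and card names are hypothetical:

```python
# Toy Kanban pull system: work may only be pulled into a process block
# when that block is below its WIP limit.
wip_limits = {"analysis": 3, "build": 4, "validate": 2}
board = {"analysis": ["A", "B"], "build": ["C", "D", "E", "F"], "validate": ["G"]}

def can_pull(block: str) -> bool:
    """True if the block has spare capacity under its WIP limit."""
    return len(board[block]) < wip_limits[block]

for block in wip_limits:
    status = "can pull" if can_pull(block) else "at WIP limit, finish work first"
    print(f"{block:10s}: {len(board[block])}/{wip_limits[block]} ({status})")
```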

The Fundamentals of Continuous Delivery

There are two golden rules of continuous delivery that must be followed by everybody:

  1. The team is not allowed to say they are “done” with any piece of work until their code is in trunk on version control and releasable (for hosted services the bar is even higher — “done” means deployed to production). In The Lean Startup, Eric Ries argues that for new features that aren’t simple user requests, the team must also have run experiments on real users to determine if the feature achieves the desired outcome.
  2. The team must prioritise keeping the system in a deployable state over doing new work. This means that if at any point we are not confident we can take whatever is on trunk in version control and deliver it to users through an automated, push-button process, we need to stop working and fix that problem.

To find out if you’re really doing CI, ask your team the following questions:

  • Are all the developers on the team checking into trunk (not just merging from trunk into their branches or working copies) at least once a day? In other words, are they doing trunk-based development and working in small batches?
  • Does every change to trunk kick off a build process, including running a set of automated tests to detect regressions?
  • When the build and test process fails, does the team fix the build within a few minutes, either by fixing the breakage or by reverting the change that caused the build to break?

If the answer to any of these questions is “no,” you aren’t practicing continuous integration.
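The first question, at least, can be checked mechanically from version control history. A rough sketch, assuming the script runs inside a clone whose trunk branch is named main (adjust to suit your repository):

```python
# Rough check of "is everyone integrating into trunk daily?" via git history.
import subprocess
from collections import Counter

authors = subprocess.run(
    ["git", "log", "main", "--since=1.day", "--pretty=%an"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for author, count in Counter(authors).items():
    print(f"{author}: {count} commit(s) to trunk in the last day")
if not authors:
    print("No trunk commits in the last day - worth asking why.")
```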

The most important principle for doing low-risk releases is this: decouple deployment and release. To understand this principle, we must first define these terms. Deployment is the installation of a given version of a piece of software to a given environment. The decision to perform a deployment — including to production — should be a purely technical one. Release is the process of making a feature, or a set of features, available to customers. Release should be a purely business decision.

Organisations and continuous delivery maturity

Figure 21. As organizations work to implement continuous delivery, they will have to change the way they approach version control, software development, architecture, testing, and infrastructure and database management

However, we do not propose solutions to achieve these goals or write stories or features (especially not “epics”) at the program level. Rather, it is up to the teams within the program to decide how they will achieve these goals. This is critical to achieving high performance at scale, for two reasons:

  • The initial solutions we come up with are unlikely to be the best. Better solutions are discovered by creating, testing, and refining multiple options to discover what best solves the problem at hand.
  • Organisations can only move fast at scale when the people building the solutions have a deep understanding of both user needs and business strategy and come up with their own ideas.

A program-level backlog is not an effective way to drive these behaviours — it just reflects the almost irresistible human tendency to specify “the means of doing something, rather than the result we want.”

Getting to Target Conditions

Gojko Adzic presents a technique called impact mapping to break down high-level business goals at the program level into testable hypotheses. Adzic describes an impact map as “a visualisation of scope and underlying assumptions, created collaboratively by a cross-functional group of stakeholders. It is a mind-map grown during a discussion facilitated by answering the following questions:

  1. Why?
  2. Who?
  3. How?
  4. What?”
Impact Mapping

Figure 22. Impact Mapping

Once we have a prioritised list of target conditions and impact maps created collaboratively by technical and business people, it is up to the teams to determine the shortest possible path to the target condition. This tool differs in important ways from many standard approaches to thinking about requirements. Here are some of the important differences and the motivations behind them:

  • There are no lists of features at the program level
    • Features are simply a mechanism for achieving the goal. To paraphrase Adzic, if achieving the target condition with a completely different set of features than we envisaged won’t count as success, we have chosen the wrong target condition. Specifying target conditions rather than features allows us to rapidly respond to changes in our environment and to the information we gather from stakeholders as we work towards the target condition. It prevents “feature churn” during the iteration. Most importantly, it is the most effective way to make use of the talents of those who work for us; this motivates them by giving them an opportunity to pursue mastery, autonomy, and purpose.
  • There is no detailed estimation
    • We aim for a list of target conditions that is a stretch goal — in other words, if all our assumptions are good and all our bets pay off, we think it would be possible to achieve them. However, this rarely happens, which means we may not achieve some of the lower-priority target conditions. If we are regularly achieving much less, we need to rebalance our target conditions in favour of process improvement goals. Keeping the iterations short — 2–4 weeks initially — enables us to adjust the target conditions in response to what we discover during the iteration. This allows us to quickly detect if we are on a wrong path and try a different approach before we overinvest in the wrong things.
  • There are no “architectural epics”
    • The people doing the work should have complete freedom to do whatever improvement work they like (including architectural changes, automation, and refactoring) to best achieve the target conditions. If we want to drive out particular goals which will require architectural work, such as compliance or improved performance, we specify these in our target conditions.

First, we create a hypothesis based on our assumption. In Lean UX, Josh Seiden and Jeff Gothelf suggest the template shown below as a starting point for capturing hypotheses.


We believe that
[building this feature]
[for these people]
will achieve [this outcome].
We will know we are successful when we see [this signal from the market].

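Teams sometimes capture these hypotheses as structured records so experiments can be tracked and compared. A minimal sketch whose fields mirror the template; the example content is invented:

```python
# Minimal structured capture of a Lean UX-style hypothesis (illustrative only).
from dataclasses import dataclass

@dataclass
class Hypothesis:
    feature: str    # [building this feature]
    audience: str   # [for these people]
    outcome: str    # [this outcome]
    signal: str     # [this signal from the market]

    def __str__(self) -> str:
        return (f"We believe that building {self.feature} for {self.audience} "
                f"will achieve {self.outcome}. We will know we are successful "
                f"when we see {self.signal}.")

print(Hypothesis("one-click reorder", "returning customers",
                 "a higher repeat purchase rate", "a 5% lift in repeat orders"))
```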

There are many different ways to conduct research, generate results, and assess them against your hypothesis. See Figure 23 below, which shows some different methods of user research across four quadrants: generative, evaluative, quantitative and qualitative. For more on different types of user research, read UX for Lean Startups (O’Reilly) by Laura Klein.

Types of User Research

Figure 23. Types of User Research across four quadrants of generative, evaluative, quantitative and qualitative

The key outcome of an experiment is information: we aim to reduce the uncertainty as to whether the proposed work will achieve the target condition.

Ronny Kohavi, who directed Amazon’s Data Mining and Personalisation group before joining Microsoft as General Manager of its Experimentation Platform, reveals that 60%–90% of ideas do not improve the metric they were intended to improve. Thus, if we’re not running experiments to test the value of new ideas before completely developing them, the chances are that about 2/3 of the work we are doing is of either zero or negative value to our customers — and certainly of negative value to our organisation, since this work costs us in three ways.

They were able to calculate a dollar amount for the revenue impact of performance improvements, discovering that “an engineer that improves server performance by 10 msec more than pays for his fully-loaded annual costs.”

One of the most common challenges encountered in software development is the focus of teams, product managers, and organisations on managing cost rather than value. This typically manifests itself in undue effort spent on zero-value-add activities such as detailed upfront analysis, estimation, scope management, and backlog grooming. These symptoms are the result of focusing on maximising utilisation (keeping our expensive people busy) and output (measuring their work product) — instead of focusing on outcomes, minimising the output required to achieve them, and reducing lead times to get fast feedback on our decisions.

Implement Mission Command

CEO Jeff Bezos turned this problem into an opportunity. He wanted Amazon to become a platform that other businesses could leverage, with the ultimate goal of better meeting customer needs. With this in mind, he sent a memo to technical staff directing them to create a service-oriented architecture, which Steve Yegge summarizes thus:

  1. All teams will henceforth expose their data and functionality through service interfaces.
  2. Teams must communicate with each other through these interfaces.
  3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter. Bezos doesn’t care.
  5. All service interfaces, without exception, must be designed from the ground up to be externalisable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
  6. Anyone who doesn’t do this will be fired.

Bezos hired West Point Academy graduate and ex-Army Ranger Rick Dalzell to enforce these rules. Bezos mandated another important change along with these rules: each service would be owned by a cross-functional team that would build and run the service throughout its lifecycle. As Werner Vogels, CTO of Amazon, says, “You build it, you run it.”

Amazon stipulated that all teams must conform to the “two pizza” rule: they should be small enough that two pizzas can feed the whole team — usually about 5 to 10 people. This limit on size has four important effects:

  1. It ensures the team has a clear, shared understanding of the system they are working on. As teams get larger, the amount of communication required for everybody to know what’s going on scales in a combinatorial fashion (see the sketch after this list).
  2. It limits the growth rate of the product or service being worked on. By limiting the size of the team, we limit the rate at which their system can evolve. This also helps to ensure the team maintains a shared understanding of the system.
  3. Perhaps most importantly, it decentralises power and creates autonomy, following the Principle of Mission. Each two-pizza team (2PT) is as autonomous as possible. The team’s lead, working with the executive team, would decide upon the key business metric that the team is responsible for, known as the fitness function, which becomes the overall evaluation criterion for the team’s experiments. The team is then able to act autonomously to maximise that metric.
  4. Leading a 2PT is a way for employees to gain some leadership experience in an environment where failure does not have catastrophic consequences — which “helped the company attract and retain entrepreneurial talent.”
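The combinatorial scaling in point 1 is easy to see: with n people there are n(n-1)/2 potential communication paths. A quick illustration:

```python
# Potential communication paths between n people: n * (n - 1) / 2.
for n in (5, 10, 20, 50):
    paths = n * (n - 1) // 2
    print(f"{n:2d} people -> {paths:4d} potential communication paths")
# Doubling a 5-person team to 10 more than quadruples the paths (10 -> 45).
```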

To avoid the communication overhead that can kill productivity as we scale software development, Amazon leveraged one of the most important laws of software development — Conway’s Law: “Organisations which design systems…are constrained to produce designs which are copies of the communication structures of these organisations.” One way to apply Conway’s Law is to align API boundaries with team boundaries. In this way we can distribute teams all across the world. Organisations often try to fight Conway’s Law. A common example is splitting teams by function, e.g., by putting engineers and testers in different locations (or, even worse, by outsourcing testers). Another example is when the front end for a product is developed by one team, the business logic by a second, and the database by a third. Since any new feature requires changes to all three, we require a great deal of communication between these teams, which is severely impacted if they are in separate locations. Splitting teams by function or architectural layer typically leads to a great deal of rework, disagreements over specifications, poor handoffs, and people sitting idle waiting for somebody else.

In truly decentralised organisations, we follow the principle of subsidiarity: by default, decisions should be made by the people who are directly affected by those decisions. Higher levels of bureaucracy should only perform tasks that cannot be performed effectively at the local level — that is, the authority of higher levels of bureaucracy should be subsidiary to that of the local levels.

We ensure teams are aligned by using the Improvement Kata: that is, by having iterations at the program level with defined target conditions and having teams collaborate to work out how to achieve them. Here are some strategies enterprises have successfully applied to create autonomy for individual teams:

  • Give teams the tools and authority to push changes to production
    • In companies such as Amazon, Netflix, and Etsy, teams, in many cases, do not need to raise tickets and have changes reviewed by an advisory board to get them deployed to production. In fact, in Etsy this authority is devolved not just to teams but to individual engineers. Engineers are expected to consult with each other before pushing changes, and certain types of high-risk changes (such as database changes or changes to a PCI-DSS cardholder data environment) are managed out of band. But in general, engineers are expected to run automated tests and consult with other people on their team to determine the risk of each change — and are trusted to act appropriately based on this information. ITIL supports this concept in the form of standard changes. All changes that launch dark (and which thus form the basis of A/B tests) should be considered standard changes. In return, it’s essential that teams are responsible for supporting their changes.
  • Ensure that teams have the people they need to design, run, and evolve experiments
    • Each team should have the authority and necessary skills to come up with a hypothesis, design an experiment, put an A/B test into production, and gather the resulting data. Since the teams are small, this usually means they are cross-functional with a mix of people: some generalists with one or two deep specialisms (sometimes known as “T-shaped” people), along with specialist staff such as a database administrator, a UX expert, and a domain expert. This does not preclude having centralised teams of specialists who can provide support to product teams on demand.
  • Ensure that teams have the authority to choose their own toolchain
    • Mandating a toolchain for a team to use is an example of optimising for the needs of procurement and finance rather than for the people doing the work. Teams must be free to choose their own tools. One exception to this is the technology stack used to run services in production. Ideally, the team will use a platform or infrastructure service (PaaS or IaaS) provided by internal IT or an external provider, enabling teams to self-service deployments to testing and (where applicable) production environments on demand through an API (not through a ticketing system or email). If no such system exists, or it is unsuitable, the team should be allowed to choose their own stack — but must be prepared to meet any applicable regulatory constraints and bear the costs of supporting the system in production.
  • Ensure teams do not require funding approval to run experiments
    • The techniques described in this book make it cheap to run experiments, so funding should not be a barrier to testing new ideas. Teams should not require approval to spend money up to a certain limit.
  • Ensure leaders focus on implementing Mission Command
    • In a growing organisation, leaders must continuously work to simplify processes and business complexity, to increase effectiveness, autonomy, and capabilities of the smallest organisational units, and to grow new leaders within these units.

Creating small, autonomous teams makes it economic for them to work in small batches. When done correctly, this combination has several important benefits:

  • Faster learning, improved customer service, less time spent on work that does not add value
  • Better understanding of user needs
  • Highly motivated people
  • Easier to calculate profit and loss 

Architecting for continuous delivery and service orientation means evolving systems that are testable and deployable. Testable systems are those for which we can quickly gain a high level of confidence in the correctness of the system without relying on extensive manual testing in expensive integrated environments. Deployable systems are those that are designed to be quickly, safely, and independently deployed to testing and (in the case of web-based systems) production environments. These “cross-functional” requirements are just as important as performance, security, scalability, and reliability, but they are often ignored or given second-class status.

Amazon did not replace their monolithic Obidos architecture in a “big bang” replacement program. Instead, they moved to a service-oriented architecture incrementally, while continuing to deliver new functionality, using a pattern known as the “strangler application.” As described by Martin Fowler, the pattern involves gradual replacement of a system by implementing new features in a new application that is loosely coupled to the existing system, porting existing functionality from the original application only where necessary.  Over time, the old application is “strangled” — just like a tree enveloped by a tropical strangler fig.

Part IV. Transform

To add further complexity to this problem, many of our traditional approaches to governance, risk, and compliance (GRC), financial management, procurement, vendor/supplier management, and human resources (recruiting, promotion, compensation) create additional waste and bottlenecks. These can only be eliminated when the entire organisation embraces lean concepts and everyone works together in the same direction.

In The Corporate Culture Survival Guide, Schein defines culture as “a pattern of shared tacit assumptions that was learned by a group as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.”

Your Startup Is Broken: Inside the Toxic Heart of Tech Culture provides another perspective, commenting that “our true culture is made primarily of the things no one will say…Culture is about power dynamics, unspoken priorities and beliefs, mythologies, conflicts, enforcement of social norms, creation of in/out groups and distribution of wealth and control inside companies.”

In his management classic The Human Side of Enterprise, Douglas McGregor describes two contrasting sets of beliefs held by managers he observed, which he calls Theory X and Theory Y. Managers who hold Theory X assumptions believe that people are inherently lazy and unambitious and value job security more than responsibility; extrinsic (carrot-and-stick) motivation techniques are the most effective to deal with workers. In contrast, Theory Y managers believe “that employees could and would link their own goals to those of the organisation, would delegate more, function more as teachers and coaches, and help employees develop incentives and controls that they themselves would monitor.”

People involved in non-routine work are motivated by intrinsic factors, summarised by Dan Pink as:

  1. Autonomy — the desire to direct our own lives.
  2. Mastery — the urge to get better and better at something that matters.
  3. Purpose — the yearning to do what we do in the service of something larger than ourselves.

Culture is hard to change by design. As Schein says, “Culture is so stable and difficult to change because it represents the accumulated learning of a group — the ways of thinking, feeling, and perceiving the world that have made the group successful.”

Writing in MIT Sloan Management Review, John Shook, Toyota City’s first US employee, reflected on how the cultural change at NUMMI was achieved:

  • What my NUMMI experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave — what they do. Those of us trying to change our organisations’ culture need to define the things we want to do, the ways we want to behave and want each other to behave, to provide training and then to do what is necessary to reinforce those behaviours. The culture will change as a result…What changed the culture at NUMMI wasn’t an abstract notion of “employee involvement” or “a learning organisation” or even “culture” at all. What changed the culture was giving employees the means by which they could successfully do their jobs. It was communicating clearly to employees what their jobs were and providing the training and tools to enable them to perform those jobs successfully.

It’s hard to achieve sustained, systemic change without any crisis. In The Corporate Culture Survival Guide, Schein asks if crisis is a necessary condition of successful transformations; his answer is, “Because humans avoid unpredictability and uncertainty, hence create cultures, the basic argument for adult learning is that indeed we do need some new stimulus to upset the equilibrium. The best way to think about such a stimulus is as disconfirmation: something is perceived or felt that is not expected and that upsets some of our beliefs or assumptions…disconfirmation creates survival anxiety — that something bad will happen if we don’t change — or guilt — we realize that we are not achieving our own ideals or goals.”

Old and new approaches to cultural change

Figure 24. Old and new approaches to cultural change

Once people accept the need for change, they are confronted with the fear that they may fail at learning the new skills and behaviour required of them, or that they may lose status or some significant part of their identity — a phenomenon Schein calls learning anxiety. Schein postulates that for change to succeed, survival anxiety must be greater than learning anxiety, and to achieve this, “learning anxiety must be reduced rather than increasing survival anxiety.”

At the beginning of every postmortem, every participant should read aloud the following words, known as the Retrospective Prime Directive: “Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

Given that the culture of an organisation has such a dominant effect on the performance of individuals, should we care at all about the particular skills and attitudes of individuals? Instead of taking a “bank account” view that focuses on people’s existing capabilities, it’s more important to consider their ability to acquire new skills — particularly in the field of technology where useful knowledge and skills change rapidly.

Dweck’s two mindsets – Fixed & Growth Mindset

Figure 25. Dweck’s two mindsets – Fixed & Growth Mindset

Google has done a great deal of research into what makes for an effective recruiting process in the context of technology. According to Laszlo Bock, who led Google’s People Operations, the top three criteria are:

  • Learning ability, including the ability to “process on the fly” and “pull together disparate bits of information.”
  • Leadership, “in particular emergent leadership as opposed to traditional leadership. Traditional leadership is, were you president of the chess club? Were you vice president of sales? How quickly did you get there? We don’t care. What we care about is, when faced with a problem and you’re a member of a team, do you, at the appropriate time, step in and lead. And just as critically, do you step back and stop leading, do you let someone else? Because what’s critical to be an effective leader in this environment is you have to be willing to relinquish power.”
  • Mindset. “Successful bright people rarely experience failure, and so they don’t learn how to learn from that failure…They, instead, commit the fundamental attribution error, which is if something good happens, it’s because I’m a genius. If something bad happens, it’s because someone’s an idiot or I didn’t get the resources or the market moved.”

Bock goes on to observe that the most successful people at Google “will have a fierce position. They’ll argue like hell. They’ll be zealots about their point of view. But then you say, here’s a new fact, and they’ll go, Oh, well, that changes things; you’re right”. To deal with an uncertain future and still move forward, people should have “strong opinions, which are weakly held.”

Embrace Lean Thinking for Governance, Risk and Compliance

We often hear that Lean Startup principles and the techniques and practices we suggest in this book would never work in large enterprises because of governance. “This won’t meet regulatory requirements.” “That doesn’t fit in our change management process.” “Our team can’t have access to servers or production.” These are just a few examples of the many reasons people have given for dismissing the possibility of changing the way they work. When we hear these objections, we recognise that people aren’t really talking about governance; they are referring to processes that have been put in place to manage risk and compliance and conflating them with governance. Like any other processes within an organisation, those established for managing governance, risk, and compliance (GRC) must be targets for continuous improvement to ensure they contribute to overall value. We refer to “GRC teams.” For clarity, our discussion and examples focus on teams that strongly influence how technology can be used within organisations; the more common ones are the PMO, technical architecture, information security, risk and compliance, and internal audit teams.

Governance is about keeping our organisation on course. It is the primary responsibility of the board of directors, but it applies to all people and other entities working for the organisation. It requires the following concepts and principles to be applied at all levels:

  • Responsibility
    • Each individual is responsible for the activities, tasks, and decisions they make in their day-to-day work and for how those decisions affect the overall ability to deliver value to stakeholders.
  • Authority or accountability
    • There is an understanding of who has the power and responsibility to influence behaviours within the organisation and of how it works.
  • Visibility
    • Everyone at all times can view the outcomes achieved by the organisation and its components, based on current and real data. This, in turn, can be mapped to the organisation’s strategic goals and objectives.
  • Empowerment
    • The authority to act to improve the delivery of value to stakeholders is granted at the right level — to the people who will deal with the results of the decision.

Risk is the exposure we accept to the possibility of something unpleasant occurring. We all manage risks daily, at work, home, and play. As it is impossible to eliminate every risk, the question to be answered in managing risk is, “Which risks are you willing to live with?”

Compliance is obedience to laws, industry regulations, legally binding contracts, and even cultural norms. The intention of mandated compliance is usually to protect the interest of stakeholders with regard to privacy of information, physical safety, and financial investments.

Management Is Not Governance

COBIT clearly explains the difference between governance and management:

  • Governance ensures that stakeholder needs, conditions, and options are evaluated to determine balanced agreed-on enterprise objectives to be achieved; sets direction through prioritisation and decision making; and monitors performance and compliance against agreed-on direction and objectives.
  • Management plans, builds, runs, and monitors activities in alignment with the direction set by the governance body to achieve the enterprise objectives.

Good GRC management maintains a balance between implementing enough control to prevent bad things from happening and allowing creativity and experimentation to continuously improve the value delivered to stakeholders.

Unfortunately, many GRC management processes within enterprises are designed and implemented within a command-and-control paradigm. They are highly centralised and are viewed as the purview of specialised GRC teams, who are not held accountable for the outcomes of the processes they mandate. The processes and controls these teams decree are often derived from popular frameworks without regard to the context in which they will be applied and without considering their impact on the entire value stream of the work they affect. They often fail to keep pace with technology changes and capabilities that would allow the desired outcomes to be achieved by more lightweight and responsive means. This forces delivery teams to complete activities that add no overall value, creates bottlenecks, and increases the overall risk of failure to deliver in a timely manner.

In How to Measure Anything, Douglas Hubbard reports Peter Tippet of Cybertrust discussing “what he finds to be a predominant mode of thinking about [IT security]. He calls it the ‘wouldn’t it be horrible if…’ approach. In this framework, IT security specialists imagine a particularly catastrophic event occurring. Regardless of its likelihood, it must be avoided at all costs. Tippet observes: ‘since every area has a “wouldn’t it be horrible if…” all things need to be done. There is no sense of prioritisation.’” When prioritising work across our portfolio, there must be no free pass for work mitigating “bad things” to jump to the front of the line. Instead, quantify risks by considering their impacts and probabilities using impact mapping and then use Cost of Delay to balance the mitigation work against other priorities. In this way we can manage security and compliance risks using an economic framework instead of fear, uncertainty, and doubt.

When GRC teams do not take a principles-based approach and instead prescribe the rules that teams must blindly follow, the familiar result is risk management theater: an expensive performance that is designed to give the appearance of managing risk but actually increases the chances of unintended negative consequences.

Preventive controls, when executed on the wrong level, often lead to unnecessarily high costs, forcing teams to:

  • Wait for another team to complete menial tasks that can be easily automated and run when needed
  • Obtain approvals from busy people who do not have a good understanding of the risks involved in the decision and thus become bottlenecks
  • Create large volumes of documentation of questionable accuracy which becomes obsolete shortly after it is finished
  • Push large batches of work to teams and special committees for approval and processing and then wait for responses

To meet compliance and reduce security risks, many organisations now include information security specialists as members of cross-functional product teams. Their role is to help the team identify the possible security threats and the level of controls required to reduce them to an acceptable level. They are consulted from the beginning and are engaged in all aspects of product delivery:

  • Contributing to design for privacy and security
  • Developing automated security tests that can be included in the deployment pipeline
  • Pairing with developers and testers to help them understand how to prevent adding common vulnerabilities to the code base
  • Automating the process of testing security patches to systems

As working members of the team, information security specialists help shorten feedback loops related to security, reduce overall security risks in the solution, improve collaboration and the knowledge of information security issues in other team members, and themselves learn more about the context of the code and the delivery practices.

Evolve Financial Management to Drive Product Innovation

In many large enterprises, financial management processes (FMPs) are designed around the project paradigm. This presents an obstacle to taking a product-based approach to innovation. It is relatively easy for small teams to work and collaborate amongst themselves. However, on an enterprise scale, we eventually reach a point where evolution is blocked by rigid, centralised FMPs that drive the delivery and procurement processes that limit the options for innovating at scale.

We consider the organisational financial management practices within enterprises that are typically identified as deterrents to innovation:

  • Basing business decisions on a centralised annual budget cycle, with exceptions considered only under extreme circumstances. This combines forecasting, planning, and monitoring into a single centralised process, performed once a year, which results in suboptimal output from each of these important activities.
  • Using the capability to hit budget targets as a key indicator of performance for individuals, teams, and the organisation as a whole, which merely tells you how well people play the process, not the outcomes they have achieved over the past year.
  • Basing business decisions on the financial reporting structure of capital versus operating expense. This limits the ability to innovate by starting with a minimal viable product that grows gradually or can be discarded at any time. The CapEx/OpEx model of reporting costs is largely based on physical assets and is project based; it does not translate well to the use of information to experiment, learn, and continually improve products over time.

However, in the context of product development, the traditional annual budget cycle can easily:

  • Reduce transparency into the actual costs of delivering value — costs are allocated by functional cost centers or by which bucket the money comes from, without an end-to-end product view.
  • Remove decisions from the people doing the work — the upper management establishes and mandates detailed targets.
  • Direct costs away from value creation by enforcing exhaustive processes for approving, tracking, and justifying costs.
  • Measure performance by the ability to please the boss or produce output — not by actual customer outcomes — by rewarding those who meet budget targets, no matter what the overall and long-range cost may be.

The great planning fallacy, evident in the centralised budget process, is that if we develop a detailed upfront financial plan for the upcoming year, it will simply happen — if we stick to the plan. The effort to develop these kinds of plans is a waste of time and resources, because product development is as much about discovery as it is about execution. Costs will change, new opportunities will arise, and some planned work will turn out not to generate the desired outcomes. In today’s world of globalisation, rapid technology growth, and increasing unpredictability it is foolish to think that accurate, precise plans are achievable or even desirable.

Activity-based accounting (or costing) allows us to allocate the total costs of services and activities to the business activity or product that drives those costs. It provides us with a better picture of the true financial value being delivered by the product.

However, the traditional budget process prevents us from paying attention to the most important questions: did we plan at the right level, set good targets, get more efficient, or improve customer satisfaction? Are our products improving or dying? Are we in a better financial position than we were before?

The traditional process also serves to obscure the true cost of ownership and escalates operating costs. A project will be fully capitalised, allowing us to spread out the reporting of that cost over an extended period, so it has less short-term impact on our profit. However, many of the items that are being capitalised during the initial project have an immediate negative impact on our OpEx, starting before or immediately after the project team disbands. The long-term operating costs required to support the increasing complexity of systems created by projects are not calculated when capitalised projects are approved (because they don’t come out of the same bucket). Ongoing support and retirement of products and services is an OpEx problem. In the end, OpEx teams are stuck with justifying their ever-growing costs, caused by the bloat and complexity created by CapEx decisions. If we are serious about innovation, it shouldn’t really matter which bucket funding comes from. Open, frank discussion, based on real evidence of the total end-to-end cost of the product, is what we should use as the basis of business decisions. Allocation of a product’s development funding to CapEx or OpEx should be performed by accountants after the business decisions are made.

The first mistake in the typical procurement process is thinking that, with large amounts of upfront planning, we can manage the risk of getting something that doesn’t deliver the expected value. This planning is normally captured in a request for proposal (RFP), which has several negative side effects:

  • It’s a poor way to manage the risks of product development
  • It favours incumbents
  • It favours large service providers
  • It inhibits transparency
  • It is inaccurate
  • It ignores outcomes

The second mistake in the typical procurement process is that it assumes all service providers are equal in both the quality of the people working on the delivery and the quality of the software delivered.

Turn IT into a Competitive Advantage

High-performing IT organisations are able to achieve both high throughput, measured in terms of change lead time and deployment frequency, and high stability, measured as the time to restore service after an outage or an event that caused degraded quality of service. High-performing IT organisations also have 50% lower change fail rates than medium- and low-performing IT organisations.
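As a sketch, these two throughput measures and two stability measures could be computed from simple deployment records; the data shape and numbers below are hypothetical:

```python
# Hypothetical computation of the four measures from deployment records.
deployments = [
    # (lead time in hours from commit to deploy, caused a failure?, hours to restore)
    (20, False, 0), (16, True, 1), (30, False, 0), (12, False, 0), (24, True, 2),
]
days_observed = 7

lead_times = sorted(d[0] for d in deployments)
failures = [d for d in deployments if d[1]]

print(f"change lead time (median): {lead_times[len(lead_times) // 2]} hours")
print(f"deployment frequency: {len(deployments) / days_observed:.2f} per day")
print(f"change fail rate: {len(failures) / len(deployments):.0%}")
print(f"time to restore (mean): {sum(d[2] for d in failures) / len(failures):.1f} hours")
```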

The practices most highly correlated with high IT performance (increasing both throughput and stability) are:

  • Keeping systems configuration, application configuration, and application code in version control
  • Logging and monitoring systems that produce failure alerts
  • Developers breaking up large features into small, incremental changes that are merged into trunk daily
  • Developers and operations regularly achieving win/win outcomes when they interact

There are two other factors that strongly predict high performance in IT. The first is a high-trust organisational culture. The second is a lightweight peer-reviewed change approval process.

Instead of creating controls to compensate for pathological cultures, the solution is to create a culture in which people take responsibility for the consequences of their actions — in particular, customer outcomes. There is a simple but far-reaching prescription to enable this behaviour:

  1. You build it, you run it.
  2. Turn central IT into a product development organisation.
  3. Invest in reducing the complexity of existing systems. 

While moving to external cloud suppliers carries different risks compared to managing infrastructure in-house, many of the reasons commonly provided for creating a “private cloud” do not stand up to scrutiny. Leaders should treat objections citing cost and data security with skepticism: is it reasonable to suppose your company’s information security team will do a better job than Amazon, Microsoft, or Google, or that your organisation will be able to procure cheaper hardware?

When using COTS, it is crucial not to customise the packages. We can’t emphasise strongly enough the problems and risks associated with customising COTS. When organisations begin customising, it’s hard to stop — but customisations of COTS packages are extremely expensive to build and maintain over time. Once you get beyond a certain amount of customisation, the original vendor will often no longer support the package. Upgrading customised packages is incredibly painful, and it’s hard to make changes quickly and safely to a customised COTS system.

Start Where You Are

First, starting small with a cross-functional team and gradually growing the capability of the product, while delivering value iteratively and incrementally, is an extremely effective way to mitigate the risks of replacing high-visibility systems, while simultaneously growing a high-performance culture. It provides a faster return on investment, substantial cost savings, and happier employees and users. This is possible even in a complex, highly regulated environment such as the government.

Second, instead of trying to replace existing systems and processes in a “big bang,” the GDS replaced them incrementally, choosing to start where they could most quickly deliver value. They took the “strangler application” pattern and used it to effect both architectural and organisational change.

Third, the GDS pursued principle-based governance. The leadership team at GDS does not tell every person what to do but provides a set of guiding principles for people to make decisions aligned to the objectives of the organisation. The GDS governance principles state:

  1. Don’t slow down delivery.
  2. Decide, when needed, at the right level.
  3. Do it with the right people.
  4. Go see for yourself.
  5. Only do it if it adds value.
  6. Trust and verify.

People are trusted to make the best decisions in their context, but are accountable for those decisions — in terms of both the achieved outcomes and knowing when it is appropriate to involve others.




Top 8 Books for Technical Leaders

We’re always searching for ways to develop and maintain high-performing teams, reduce risk, improve quality and increase speed to market. The books below have helped in my journey to become a better information technology / software professional and technical leader. The books discuss productivity, leading / managing knowledge workers in creative work, growing technical leadership skills, agile leadership, lean thinking, change & financial management.

Peopleware: Productive Projects and Teams – Tom DeMarco & Tim Lister

Peopleware - Productive Projects and Teams 

Peopleware is a great book for leaders of knowledge workers and creative environments. Through their experience and research, DeMarco and Lister provide examples of how to enable productive teams and describe common productivity killers. Peopleware discusses:

  • How to keep staff happy, retention high, burnout low.
  • How to setup your team environment for maximum productivity.
  • How to create a learning culture.
  • Setting up the office and work environment to maximise flow time and teamwork.
  • Leadership, management, goal alignment and networking.
  • Factors that will lead to teamicide – i.e. breaking teams.
  • Creating a culture of transparency and trust.
  • Empowering people to define methods most appropriate for their work, rather than strict adherence to prescribed methodologies.
  • Tips for effectively and pragmatically managing risk and change management.
  • How to run effective meetings, avoid ceremonies and ensure working meetings have outcomes and are efficient and effective.
  • How to grow community and culture, make work fun, enable innovation, keep motivation high and teams happy.

You can find a detailed Peopleware: Productive Projects and Teams summary here.

You can review and purchase Peopleware: Productive Projects and Teams on Amazon.com.au.

 


The Manager’s Path: A Guide for Tech Leaders Navigating Growth & Change – Camille Fournier

The Manager’s Path

<<< Summary In Progress >>>


You can review and purchase The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change on Amazon.com.au.


Leading Geeks – How to Manage and Lead the People Who Deliver Technology – Paul Glen

Leading Geeks

<<< Summary In Progress >>>


 

You can review and purchase Leading Geeks – How to Manage and Lead the People Who Deliver Technology on Amazon.com.au.


Management 3.0 Leading Agile Developers, Developing Agile Leaders – Jurgen Appelo

Management 3.0

Management 3.0 is a book designed for the agile manager / leader, presented as a synthesis of theory, science, nature and the practical real-world experience of Jurgen Appelo. Through understanding the differences between ordered systems (i.e. predictable) and complex systems (i.e. unpredictable), better structures, processes, methods and approaches can be used to manage, lead & motivate teams, reduce risk and increase the chance of success. Management 3.0 discusses:

  • The differences between ordered systems and complex systems, and selecting the right method for your environment and challenge.
  • Improving chances of success by embracing a constant flow of failure, learning and evolving cycles.
  • Reviews of existing agile methods such as RAD, Scrum, XP, Lean, CMMI, PMBOK, Prince2 and RUP, some of their limitations, and the CHAOS report on project failure.
  • The role of complexity – the state between order and chaos where innovation and creativity thrive.
  • The importance of social networks within an organisation and osmotic communication (overhearing conversations and information); the connectivity of an individual and team is one of the best predictors of performance.
  • How to manage a creative environment, including the core tenets of safety, play, work variation, creative visibility, and an environment which challenges the comfort zone.
  • How to create an environment of motivation for knowledge workers.
  • Enabling self-organising teams, which are best suited to complex/dynamic environments, delegating and enabling decisions to be made at the right level (where the knowledge resides). Through aligning constraints, setting boundaries and protecting the environment, managers can define the direction and shared goals of autonomous teams.
  • How to develop competence within individuals and teams, and capture key project performance metrics for feedback loops.
  • Communication and feedback ideas.
  • Organisational structure, team size and makeup, and generalising-specialist / T-shaped individuals.
  • Embracing continuous change in search of system optimisation through adaptation, exploration and anticipation.
  • Continuous improvement through the plan – do – check – act cycle and similar models. Often, as the team tries different methods to improve productivity, it will take one step back and two steps forward.

You can find a detailed Management 3.0 summary here.

You can review and purchase Management 3.0 Leading Agile Developers, Developing Agile Leaders on Amazon.com.au.


The Lean Startup: How Constant Innovation Creates Radically Successful Businesses – Eric Ries

The Lean Startup

<<< Summary In Progress >>>


You can review and purchase The Lean Startup: How Constant Innovation Creates Radically Successful Businesses on Amazon.com.au.


Leading Change – John P. Kotter

Leading Change

<<< Summary In Progress >>>


You can review and purchase Leading Change on Amazon.com.au.


The Mythical Man-Month – Fred Brooks

The Mythical Man Month

<<< Summary In Progress >>>


You can review and purchase The Mythical Man-Month on Amazon.com.au.


Beyond Budgeting – Jeremy Hope

Beyond Budgeting

<<< Summary In Progress >>>


You can review and purchase Beyond Budgeting on Amazon.com.au.


 

I hope you’ve found the above books useful. Have I missed any books you think software / information technology professionals and leaders should read?
Would love to hear your thoughts and feedback below…

 

* As an Amazon Associate I earn from qualifying purchases…



Principles Of Software Development

A set of guiding principles for software development, applying rules of thumb rather than strict governance.

P1:    Build in the simplest way possible (KIS).

P2:    Prefer working in smaller increments, build for fast feedback, refactor as necessary. Apply the rule of 3.

P3:    Be a commercial developer (consider build cost, support cost & total cost of ownership) and provide regular updates on progress.

P4:    Be flexible in your approach depending on the problem at hand – prototype / spike / hack for early customer or technical feedback, then build solid, testable, maintainable, clean & quality code once the feature/concept is proven.

P5:    Apply the Testing Pyramid approach to quality assurance.

P6:    Pick the best tool / technology / approach for the job at hand. Consider optimising for the whole; globally rather than locally.

P7:    Apply 12-factor app design, with the architecture emerging. Consider the *ilities and make the trade-offs visible, as you shouldn’t necessarily design for all of them – see architectural fitness functions.

P8:   Collective (collaborative) code ownership – the sum of all experiences leads to better software.

P9:   Follow Robert C. Martin’s ‘boy scout rule’: leave the code better than you found it.

P10:    Follow the Agile Documentation Manifesto. Prefer working software over documentation.

P11:  Replace manual processes with automation – automate all the things, reduce waste, improve throughput.

P12:   Be disciplined – taking shortcuts / taking on technical debt can be an option in the short term, but left unpaid it almost always leads to poor longer-term outcomes: reduced team productivity, reduced cost-effectiveness and increased risk.

P13:  Work at a sustainable pace, limit work-in-progress (WIP), stop starting and start finishing.

P14:  Design for failure (error driven design); consider all the things that could go wrong such as hardware failures, network failures, database failures, system slowness, upstream & downstream system failures, cancellations, time-outs, non-happy-day user flows etc.
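As a small illustration of P14, here is a call to a downstream dependency wrapped with a timeout, bounded retries and a fallback; the endpoint and limits are invented:

```python
# Defensive call to a downstream system (P14): bound the wait, retry transient
# failures a few times, then fall back rather than hang or crash.
import time
import urllib.request

def fetch_price(url: str, timeout_s: float = 2.0, retries: int = 3):
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except OSError:                    # URLError and timeouts subclass OSError
            if attempt == retries:
                return None                # retries exhausted: take the fallback path
            time.sleep(0.5 * attempt)      # simple backoff before the next attempt

price = fetch_price("https://example.com/price")   # hypothetical endpoint
print("falling back to cached price" if price is None else "fresh price fetched")
```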

 




Agile Documentation Manifesto

Agile documentation should:

  1. Keep it simple (KIS), keep it lean (KIL).
  2. Be clear and unambiguous.
  3. Be lightweight: dot points, single sentences, to the point.
  4. Use consistent presentation with whitespace.
  5. Be well designed, organised, structured.
  6. Prefer diagrams over words.
  7. Be easily searched and navigated.
  8. Be easily updateable.
  9. Be just in time (JIT).
  10. Be stable (don’t document rapidly changing work).

Ideally documentation should convey information clearly, concisely, reduce information silos (i.e. knowledge only known by an individual/team) and reduce re-learning time across teams & organisations.

 

 


Reference Material

Extract from Agile Modelling:

Agile documentation principles

  • The fundamental issue is communication, not documentation.
  • Agilists write documentation if that’s the best way to achieve the relevant goals, but there often proves to be better ways to achieve those goals than writing static documentation.
  • Document stable things, not speculative things.
  • Take an evolutionary approach to documentation development, seeking and then acting on feedback on a regular basis.
  • Prefer executable work products such as customer tests and developer tests over static work products such as plain old documentation (POD).
  • You should understand the total cost of ownership (TCO) for a document, and someone must explicitly choose to make that investment.
  • Documentation should be concise: overviews/roadmaps are generally preferred over detailed documentation.
  • Travel as light as you possibly can.
  • Documentation should be just barely good enough.
  • Comprehensive documentation does not ensure project success; in fact, it increases your chance of failure.
  • Models are not necessarily documents, and documents are not necessarily models.
  • Documentation is as much a part of the system as the source code.
  • The benefit of having documentation must be greater than the cost of creating and maintaining it.
  • Developers rarely trust the documentation, particularly detailed documentation because it’s usually out of sync with the code.
  • Ask whether you NEED the documentation, not whether you want it.
  • Create documentation only when you need it at the appropriate point in the life cycle.
  • Update documentation only when it hurts.

 

When is a document agile?

  • Agile documents maximize stakeholder ROI.  
  • Stakeholders know the TCO of the document.  
  • Agile documents are “lean and sufficient”.
  • Agile documents fulfill a purpose.  
  • Agile documents describe “good things to know”.
  • Agile documents have a specific customer and facilitate the work efforts of that customer.  
  • Agile documents are sufficiently accurate, consistent, and detailed.  

Software Delivery – Optimise for predictability or productivity?

This blog post was inspired by a recent work rant:

/Rant
It may be worth having a conversation around what a delivery plan is (and isn’t). Once the delivery plan has been communicated, it will likely be out of date, as we’re working in an unpredictable complex system (not an ordered, predictable system). I hope we consider the delivery plan as an alignment tool which will constantly change as we learn, react & adapt (not as a stick for fixed dates/deliverables). If we need a guaranteed delivery plan and dates, then perhaps we need to use a different method of planning & delivery (perhaps Waterfall, with lots of buffer, contingency & lead times).
/End Rant

Often when delivering software we have two competing interests: being predictable in delivery (i.e. hitting targets and deadlines) vs. maximising productivity (delivering valuable output). For this article we’ll define predictability as ‘delivering software features on time, to budget and to an acceptable level of quality’ and productivity as ‘maximising value-add features, maximising learning and minimising waste’. From experience these two concepts can be opposing, and depending on the type of work at hand your initiative can land in a different place on the scale below in figure 1.

Figure 1. Predictability vs. Productivity in software delivery

Often the more predictable a team needs to be, the less actual value-adding work (or validated learning) it will produce; it will often partake in poor processes and generate low-to-no value-add artefacts. My hypothesis is that if two independent teams were to solve the same problem, within the same environment, with the same constraints, technical stacks etc. (i.e. an identical space), the team choosing to be more predictable would be between 10% and 40% less productive. The more predictable team would spend more time on upfront analysis, design, architecture, upfront spikes on unknown areas, and more time breaking down work into detailed tasks, estimating & planning. They would then likely deliver in larger batches and have longer release cycles, but the team would hit delivery targets and budgets and be predictable – happy days (or so we think)!

I suspect the more productive team would be less predictable, most likely starting by completing a high-level delivery plan upfront (i.e. a mud map), identifying dependencies with long lead times early on, attacking hidden complexities through working software, and constantly evolving their delivery plans, budgets and forecasts. The more productive team would spend less time producing upfront artefacts such as business requirement documents (BRDs), solution architecture documents (SADs), detailed work breakdown structures, detailed delivery plans & budgets, but would plan to deliver the smallest vertical increment of working software, and continue to iterate and build based on rapid feedback from the environment. Valuable working software would provide the best feedback, documentation and risk reduction, and any artefacts required, such as architecture diagrams and user documentation, would be produced just in time. The team would frequently evolve the delivery plan based on historic velocity, with just enough detail to communicate dependencies, ensure alignment, communicate actual & forecast dollar burn-rate, and set/reset delivery expectations as value is realised and the ecosystem changes.

Methods of Software Delivery

The Waterfall method is an extreme example of large up-front effort with little to no early value and long lead times. Scrum introduced time-boxed, value-adding increments and lands in the middle of the predictability vs. productivity scale, while the Lean/Kanban method is an example of a fluid, flow-based delivery method, as seen in figure 2.

Figure 2. Schedule based vs Flow based software delivery methods

Extreme caution should be used when moving too far towards big upfront delivery planning (such as the Waterfall method), where there is a need to try to understand and solve all problems at the start of a project. As highlighted in the 2015 Standish Chaos report, smaller projects have a much higher chance of success, and across all project sizes Agile methods delivered success 39% of the time (challenged 52%, failed 9%), compared to Waterfall, which delivered success 11% of the time (challenged 60%, failed 29%). Ignoring Waterfall, the predictable team above may choose the Scrum methodology; assuming a 7-person delivery team, a two-week sprint and a 40-hour work week, we see the following time spent on Scrum rituals per team member:

  • 15m daily standup (2.5hr over a 10-working-day sprint)
  • 4hr backlog grooming, story breakdown & estimation
  • 1hr sprint planning
  • 1hr showcase
  • 1hr retrospective

Total of: 9.5 hours per team member per sprint, or ~12% of sprint time
Total of: 66.5 hours of team time per sprint
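
As a sanity check on those totals, here is a quick back-of-the-envelope calculation using only the assumptions above (a 7-person team, a two-week / 10-working-day sprint and a 40-hour week):

```python
# Scrum ritual overhead per team member, per two-week sprint.
standup = 10 * 0.25                    # 15m daily standup over 10 days = 2.5hr
grooming, planning, showcase, retro = 4.0, 1.0, 1.0, 1.0

per_member = standup + grooming + planning + showcase + retro
sprint_hours = 2 * 40                  # hours per member per sprint

print(f"{per_member} hours per member")                    # 9.5
print(f"{per_member / sprint_hours:.0%} of sprint time")   # 12%
print(f"{per_member * 7} hours of team time per sprint")   # 66.5
```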

Spending time breaking down the backlog into smaller backlog items, discussing visual and technical designs, teasing out complexity & dependencies, poker-planning and thinking about execution are very valuable activities which lead to higher confidence levels and better predictability. With a little more work relatively estimating the delivery backlog, some historic velocity and some contingency built in, the predictable team can now forecast delivery in the 2-3 month window with a level of confidence.

The productive team may feel that spending 12% of their time on rituals is heavy, and would look to spend minimal time on overheads. As work comes in and is prioritised, the team would break it down into small (similar-sized) items and tackle the next most important issue, continuously deploying value and seeking feedback. They would likely spend less time breaking down work and planning, and more time doing and assessing the results; more of a continual flow-based method. Flow-based delivery teams are often good at forecasting days-to-a-week of work, but not great at forecasting weeks-to-months of work; they are often less predictable than a typical Scrum team, but arguably more productive.

The culture of the productive team is likely to be more focused on being brave, attacking problems, being compelled to action, taking calculated risks, rewarding failure & learning cycles, supporting each other and getting things done, whilst the predictable team will likely play it safer, be more risk averse, be compelled to analysis and focus more on delivering to expectations rather than striving for stretch goals. The culture of the team, and of the broader ecosystem the team operates within, is a major influence on the team’s ethos: to be predictable or to be productive.

The above example illustrates both extremes of the predictability vs. productivity scale. There are many different considerations when choosing where to land on the scale, and my hypothesis above assumes a certain context & problem; however, many things need to be considered, such as:

The type of work at hand – is it well known, repeatable, complex or unknown?

Following the Cynefin framework, where does your work or ecosystem fit?

Are you operating in business-as-usual (BAU) mode on an established, well-understood application?

Are you building a new application from scratch and don’t fully understand the customer problem or domain?

Are you working on a pure innovation front, or with bleeding edge technology?

The team & individuals

How experienced are the team within the current ecosystem and technology?

How much career & technology experience does each team member have; graduate, junior, mid or senior?

How much cumulative experience does the team have (a team full of graduates, mid-tier developers or senior specialists or a good mix of all)?

Do you have an established, battle-hardened team, or a newly forming team (going through Tuckman’s forming, storming, norming, performing phases)?

The size of the feature, application or system

Are you building a small increment to an existing application or are you building an entire business support system?

Are you modifying a standalone feature, or a full customer or business workflow?

Are you building for a local market, or one that spans across language, time and culture?

The surrounding ecosystem of the feature, application or system

Are you working in a startup, trying to find customers, or in an established and highly regulated industry such as banking or insurance?

Do you have millions of customers on a legacy platform?

Do you have full freedom of ways of working to define your own processes, or are you bound to an existing corporate environment?

How much lead time does the business need for change management and go to market activities?

How many up-stream or down-stream dependencies do you have, and what are their lead times?

How easy is it to make a change in your ecosystem, how much effort is required to handle changes to an upfront plan?

The initiative, project or program of work in play

Are you working on a single, isolated system with limited dependencies or part of a complex, interconnected large ecosystem?

Are other teams and systems dependent on the work you’re producing to deliver on a program of work?

How many people are involved in the initiative, project or program of work – one team of 5 people, 15 teams and 150 people or hundreds of teams and thousands of people?

The lifespan of the feature, application or system

Following the lifespan of a feature, application or system by Kent Beck – Explore, Expand, Extract & Extinct

Is this feature a learning and innovative piece of work (i.e Explore) and needing extreme productivity and validated learning cycles?

Is this application or system scaling up and expanding (i.e Expand) and needing to overcome technical, system, process & people limitations?

Is this feature being delivered to an existing large customer base (i.e. Extract) where predictability and profitability are key drivers?

Is this feature, application or system being delivered at end of life (i.e. Extinct), hard to coordinate & expensive to change?

All problems are inherently different

Experience has taught me there’s often more than one way to solve a problem, each having a unique context & ecosystem; one size never fits all. Like most things in life, to move forward some trade-offs are likely required, and most teams will find themselves somewhere in the middle of the scale, doing enough work to be predictable without suffering significant productivity losses.

Where do you fit on the scale of predictability vs productivity?
What are your unique needs?
How fast do you want to go?
How predictable do you want to be?

Peopleware: Productive Projects and Teams summary

Authors: Tom DeMarco & Tim Lister

1. Managing the Human Resource

  • Don’t manage to the software vending machine mentality – standard operating processes, telling a technical team how to do the job, hiding the team from the underlying problem, prescribing solutions, leaving no room for learning or creativity etc.
  • Allow and encourage teams to make errors and learn.
  • Work a sustainable 40-hour week; nobody can continually work overtime, or sprint from start to finish on a long project, without slowing down or, worse, burning out. Creative workers require rest to be at their maximum productivity during their work hours, and great knowledge workers spend time outside work learning about their trade and applying learned excellence back to their work.
  • “Quality, far beyond that required by the end user, is a means to higher productivity.”
  • “Quality is free, but only to those who are willing to pay heavily for it.”
  • Employees who are experts at designing, developing and delivering the said work should be the people who estimate and commit to it; “Programmers seem to be a bit more productive after they’ve done the estimate themselves, compared to cases in which the manager did it without even consulting them. When the two did the estimating together, the results tended to fall in between.”

2. The Office Environment

  • “There are a million ways to lose a work day, but not even a single way to get one back.”
  • “While this [10 to 1] productivity differential among programmers is understandable, there is also a 10 to 1 difference in productivity among software organizations.”
  • A study on developer performance showed that workplaces with larger, dedicated personal workspaces which are quiet, private and interruption-free performed much better and showed reduced defect rates.
  • A study of development showed how developers spend their time:
    • Working alone 30%
    • Working with one other person 50%
    • Working with two or more people 20%
  • Working alone requires Flow Time –  a near meditative, uninterrupted state where time flows unconsciously and developers are at their most productive – work output flows. It takes time to get into the zone (15+ minutes) which is mostly unproductive time and nothing kills Flow Time like being interrupted (by a colleague, phone, email, instant message, loud noise etc) or having a workplace which doesn’t understand and design for Flow Time.
  • There is a simple formula, called the Environment Factor (EF), to work out whether your environment is set up for Flow Time. An EF of ~40%+ should allow developers enough flow time while still enabling time to work with others and in groups:
    • Environment Factor = Uninterrupted Hours / Body-Present Hours
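
A trivial illustration of the calculation (the function name and the example numbers are mine, not from the book):

```python
def environment_factor(uninterrupted_hours: float, body_present_hours: float) -> float:
    """DeMarco & Lister's Environment Factor: the share of time at your
    desk that is genuinely uninterrupted, flow-capable time."""
    return uninterrupted_hours / body_present_hours


# e.g. 3 uninterrupted hours in an 8-hour office day:
print(f"EF = {environment_factor(3, 8):.0%}")  # 38% – just under the ~40% target
```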

3. The Right People

  • Peopleware formula:
    • Get the right people.
    • Make them happy so they don’t want to leave.
    • Turn them loose.
  • Leadership as a service; leaders are enablers of knowledge workers, who in large part can self-manage. A leader’s job should be to foster and grow teams and networks, and to aim for goal alignment. “The manager’s function is not to make people work, but to make it possible for people to work”.
  • Employee turnover is very expensive. It generally costs about 4.5 to 5 months of total employee cost to replace an employee and takes approx. 3-5 months for the employee to become fully productive.
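
To put rough numbers on that, a purely illustrative calculation (the monthly cost figure and the 50%-productivity-while-ramping assumption are mine, not from the book):

```python
# Illustrative cost of losing one employee, using the ranges quoted above.
monthly_cost = 12_000        # hypothetical total monthly cost of one employee ($)
replacement_months = 4.75    # ~4.5 to 5 months of cost to replace them
ramp_months = 4              # ~3 to 5 months for the new hire to be fully productive

ramp_loss = ramp_months * monthly_cost * 0.5   # assume ~half productivity while ramping
replacement_cost = replacement_months * monthly_cost

print(f"~${replacement_cost + ramp_loss:,.0f} per departure")  # ~$81,000
```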

4. Growing Productive Teams

  • A jelled team, aligned behind a common goal and with momentum, can be unstoppable. “The purpose of a team is not goal attainment but goal alignment”. A jelled team usually has a strong sense of identity, a name, a brand, a catch phrase and a good, well-known reputation.
  • Teamicide is a list of things that will likely prevent team jelling:
    • Defensive management
    • Bureaucracy
    • Physical separation
    • Fragmentation of people’s time
    • Quality reduction of the product
    • Phony deadlines
    • Clique control
    • Motivational posters, plaques & accessories
    • Overtime & weekend work
    • Internal team competition, performance reviews  (stack ranking, manage by objective etc), excessive personal praise & rewards
  • Overtime & weekend work – “That negative impact can be substantial: error, burnout, accelerated turnover, and compensatory undertime.”
  • Full transparency, delegation with trust, support, a safe environment where failing & learning is rewarded and open dialog helps teams and organisations gain great outcomes; to be properly effective an organisation has to have this at all levels. “This Open Kimono attitude is the exact opposite of defensive management. You take no steps to defend yourself from the people you’ve put into positions of trust. And all the people under you are in positions of trust. A person you can’t trust with any autonomy is of no use to you.”
  • Great teams build networks, and these are not driven through hierarchies; “The structure of a team is a network, not a hierarchy. For all the deference paid to the concept of leadership (a cult word in our industry), it just doesn’t have much place here.”

5. Fertile Soil

  • Organisations tend to focus too much on certified methodologies rather than trusting its knowledge workers to setup systems and processes best placed for their work at hand;  “There is a big difference between Methodology and methodology. Small m methodology is a basic approach one takes to getting a job done. It doesn’t reside in a fat book, but rather inside the heads of the people carrying out the work. Big M Methodology is an attempt to centralize thinking. All meaningful decisions are made by the Methodology builders, not by the staff assigned to do the work.”
  • “Voluminous documentation is part of the problem, not part of the solution.” Big M methodologies often lead to:
    • A morass of paperwork
    • A paucity of methods
    • An absence of responsibility
    • A general loss of motivation
  • Better ways to achieve convergence of method are:
    • Training
    • Tools
    • Peer Review
  • Risk management within organisations is often seen in two extremes: non-managed, or managed so strongly risk averse as to accomplish nothing of great, transformational value; “The Peopleware premise—our main problems are more likely to be sociological than technological in nature—applies nowhere more strongly than in the area of risk”. “The risk we tend not to manage is the risk of our own failure.”
  • “The ultimate management sin is wasting people’s time.”
  • Starting projects with a full team (early overstaffing), rather than slowly ramping up through the planning & design phase at the start of a project, most often wastes time and money.
  • The cost of creating and consuming email should not be underestimated. Where possible avoid sending out corporate spam and grow a self-organising and coordinating culture without needing email as a centralised coordinating function.
  • “Experience gets turned into learning when an organization alters itself to take account of what experience has shown. Learning is limited by an organization’s ability to keep its people.”
  • Middle management is often the first to get downsized, which can have a direct and significant impact on organisational memory and an organisation’s ability to learn – “successful learning organizations are always characterized by strong middle management.” To maximise organisational learning, middle managers must work closely with each other in effective harmony, avoid bureaucracy and silos, have common aligned goals, have clear communication lines and interact frequently. Ensuring management is operating as an effective team is critical.
  • Aristotle’s five interlinked Noble Sciences that together make up Philosophy:
    • Metaphysics: the study of existence, the nature of the universe and all its contents
    • Logic: the ways we may know something, the set of permissible conclusions we may draw based on our perceptions, and some sensible rules of deduction and inference
    • Ethics: what we know about man and what we may deduce and infer (through Logic) about acceptable interactions between pairs of individuals
    • Politics: how we may logically extend Ethics to the larger group – humans and the community made up of humans
    • Aesthetics: the appreciation of symbols and images of metaphysical reality
  • Fostering an environment where community and culture grow is one of the most important roles of managers and leaders.

Meetings

  • Some organisations have a culture of meetings, which are considered more important than work, and other organisations have an extreme no-meeting culture – you need to meet somewhere in the middle. As organisations age, their meetings tend to get more frequent and longer.
  • Some dysfunctions of a meeting:
    • People in attendance but not present (i.e. using technology) or not engaged – perhaps because they’re not getting value.
    • Inviting more people than are needed to make a decision. The fewer people the better, and meeting costs should be calculated – “The cost of the meeting is directly proportional to the number attending.”
    • “A meeting that is ended by a clock is a ceremony”. A meeting where no decisions are necessarily made, and where most of the conversation is conducted between two people (i.e. the boss and round-robin speakers) with other attendees sitting idle, can be considered a status-update / FYI meeting and a waste of people’s time – it can be replaced by one-on-ones.
  • Working meetings are by nature ad hoc, called when necessary to reach a decision; a frequent, recurring meeting is normally a status meeting – “The need that was being served was not the boss’s need for information, but for reassurance”.
  • Start each meeting with an outcome in mind and the question – “What ends this meeting?”. Once the meeting has achieved its goal, end the meeting promptly.

Change Management

  • The hardest part of change management is dealing with people (not necessarily technology) – “The fundamental response to change is not logical, but emotional.”
  • Different personas to change (increasing in resistance):
    • Blindly Loyal (Ask no questions.)
    • Believers but Questioners:
      1. Skeptics (“Show me.”)
      2. Passive Observers (“What’s in it for me?”)
      3. Opposed (Fear of Change)
      4. Opposed (Fear of Loss of Power)
    • Militantly Opposed (Will Undermine and Destroy)
  • Celebrate and acknowledge our old ways of working as enabling this new change
  • “You can never improve if you can’t change at all.”
  • Change involves at the very least four stages and two key events:
    • Old Status Quo –> [foreign element / event, catalyst for change]
    • Chaos –> [transforming idea / event]
    • Practice and Integration
    • New Status Quo
  • Often with new changes you’ll go through a learning curve and dip in performance, before mastering the new ways and (hopefully) improving
  • “Change won’t even get started unless people feel safe—and people feel safe when they know they will not be demeaned or degraded for proposing a change, or trying to get through one.”

6. It’s Supposed to Be Fun to Work Here

  • Work should be fun; introducing some chaos into the mix can help with empowerment, ownership, innovation, boosting productivity, team-work and change management, and can introduce novelty. It can be done via:
    • Pilot projects
    • War games
    • Brainstorming
    • Provocative training experiences
    • Hack-days / Hackathons
    • Training, trips, conferences, celebrations, and retreats
  • When brainstorming, encourage quantity over quality – sometimes the sillier the idea the better. As idea generation slows, you can try the following strategies:
    • Analogy thinking (How does nature solve this or some similar problem?)
    • Inversion (How might we achieve the opposite of our goal?)
    • Immersion (How might you project yourself into the problem?)

You can review and purchase Peopleware: Productive Projects and Teams on Amazon.com.au.