Blog


Software Delivery – Optimise for predictability or productivity?

This blog post was inspired by a recent work rant:

/Rant
It may be worth having a conversation around what a delivery plan is (and isn’t). Once the delivery plan has been communicated, it will likely be out of date, as we’re working in an unpredictable complex system (not an ordered, predictable system). I hope we consider the delivery plan an alignment tool which will constantly change as we learn, react & adapt (not a stick for fixed dates/deliverables). If we need a guaranteed delivery plan and dates, then perhaps we need a different method of planning & delivery (perhaps Waterfall, with lots of buffer, contingency & lead times).
/End Rant

Often when delivering software we have two competing interests: being predictable in delivery (i.e. hitting targets and deadlines) vs. maximising productivity (delivering valuable output). For this article we’ll define predictability as ‘delivering software features on time, to budget and to an acceptable level of quality’ and productivity as ‘maximising value-add features, maximising learning and minimising waste’. From experience these two concepts can be opposing, and depending on the type of work at hand your initiative can land in a different place on the scale below in figure 1.

Figure 1. Predictability vs. Productivity in software delivery

Often the more predictable a team needs to be, the less actual value-adding work (or validated learning) it will produce; it will often partake in poor processes and generate low-to-no value-add artefacts. My hypothesis is that if two independent teams were to solve the same problem, within the same environment, with the same constraints, technical stacks etc. (i.e. an identical space), the team choosing to be more predictable would be between 10% and 40% less productive. The more predictable team would spend more time on upfront analysis, design, architecture and upfront spikes on unknown areas, and more time breaking work down into detailed tasks, estimating & planning. They would then likely deliver in larger batches and have longer release cycles, but the team would hit delivery targets and budgets and be predictable – happy days (or so we think)!

I suspect the more productive team would be less predictable. They would most likely start by completing a high-level delivery plan upfront (i.e. a mud map), identify dependencies with long lead times early on, attack hidden complexities through working software and constantly evolve their delivery plans, budgets and forecasts. The more productive team would spend less time producing upfront artefacts such as business requirement documents (BRDs), solution architecture documents (SADs), detailed work breakdown structures, detailed delivery plans & budgets; instead they would plan to deliver the smallest vertical increment of working software, then continue to iterate and build based on rapid feedback from the environment. Valuable working software provides the best feedback, documentation and risk reduction, and any artefacts required, such as architecture diagrams and user documentation, would be produced just in time. The team would frequently evolve the delivery plan based on historic velocity, with just enough detail to communicate dependencies, ensure alignment, communicate actual & forecast dollar burn-rate and set/reset delivery expectations as value is realised and the ecosystem changes.

Methods of Software Delivery

The Waterfall method is an extreme example of large up-front effort with little to no early value and long lead times. Scrum introduced time-boxed, value-adding increments and lands in the middle of the predictability vs. productivity scale, while the Lean/Kanban method is an example of a fluid, flow-based delivery method, as seen in figure 2.

Figure 2. Schedule based vs Flow based software delivery methods

Extreme caution should be used when moving too far towards big up-front delivery planning (such as the Waterfall method), where there is a need to try to understand and solve all problems at the start of a project. As highlighted in the 2015 Standish Chaos report, smaller projects have a much higher chance of success, and across all project sizes Agile methods delivered success 39% of the time (challenged 52%, failed 9%), compared to Waterfall which delivered success 11% of the time (challenged 60%, failed 29%). Ignoring Waterfall, the predictable team above may choose the Scrum methodology; assuming a seven-person delivery team, a two-week sprint and a 40-hour work week, we see the following time spent on Scrum rituals per team member:

  • 15m daily standup (2.5 hours over a 10-day sprint)
  • 4hr backlog grooming, story breakdown & estimation
  • 1hr sprint planning
  • 1hr showcase
  • 1hr retrospective

Total: 9.5 hours per sprint, or 12% of sprint time, per team member
Total: 66.5 hours of team time per sprint
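The arithmetic behind those totals can be checked in a few lines (a sketch using the figures assumed above: seven people, a two-week sprint of 10 working days, 40-hour weeks):

```python
# Sketch: Scrum ritual overhead for the example above.
daily_standup = 0.25 * 10          # 15 min x 10 working days = 2.5 hr
grooming = 4.0                     # backlog grooming, breakdown & estimation
planning, showcase, retro = 1.0, 1.0, 1.0

per_member = daily_standup + grooming + planning + showcase + retro
sprint_hours = 40 * 2              # 80 hr per member per two-week sprint
team_hours = per_member * 7        # seven-person team

print(per_member)                              # 9.5
print(round(per_member / sprint_hours * 100))  # 12 (%)
print(team_hours)                              # 66.5
```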

Spending time breaking down the backlog into smaller backlog items, discussing visual and technical designs, teasing out complexity & dependencies, poker-planning and thinking about execution are very valuable activities which lead to higher confidence levels and better predictability. With a little more work relatively estimating the delivery backlog, some historic velocity and some contingency built in, the predictable team can now forecast delivery in the 2-3 month window with a level of confidence.

The productive team may feel spending 12% of their time on rituals is heavy and would look to spend minimal time on overheads. As work comes in and is prioritised, the team would break work down into small (similarly sized) items and tackle the next most important issue, continuously deploying value and seeking feedback. They would likely spend less time breaking down work and planning, and more time doing and assessing the results; more of a continual flow-based method. Flow-based delivery teams are often good at forecasting days-to-a-week of work, but not great at forecasting weeks-to-months of work, and are often less predictable than a typical Scrum team, but arguably more productive.

The culture of the productive team is likely to be more focused on being brave, attacking problems, being compelled to action, taking calculated risks, rewarding failure & learning cycles, supporting each other and getting things done, whilst the predictable team will likely play it safer, be more risk averse, be compelled to analysis and be more focused on delivering to expectations rather than striving for stretch goals. The culture of the team, and the broader ecosystem the team operates within, is a major influencer on the team’s ethos: to be predictable or to be productive.

The above example illustrates both extremes of the predictability vs. productivity scale. There are many different considerations when choosing where to land on the scale, and my hypothesis above assumes a certain context & problem; many things need to be considered, such as:

The type of work at hand – is it well known, repeatable, complex or unknown?

Following the Cynefin framework, where does your work or ecosystem fit?

Are you operating in business as usual mode (BAU) of an established well understood application?

Are you building a new application from scratch and don’t fully understand the customer problem or domain?

Are you working on a pure innovation front, or with bleeding edge technology?

The team & individuals

How experienced are the team within the current ecosystem and technology?

How much career & technology experience does each team member have; graduate, junior, mid or senior?

How much cumulative experience does the team have (a team full of graduates, mid-tier developers or senior specialists or a good mix of all)?

Do you have an established, battle-hardened team, or a newly forming team (going through Tuckman’s forming, storming, norming, performing phases)?

The size of the feature, application or system

Are you building a small increment to an existing application or are you building an entire business support system?

Are you modifying a standalone feature, or a full customer or business workflow?

Are you building for a local market, or one that spans across language, time and culture?

The surrounding ecosystem of the feature, application or system

Are you working in a startup, trying to find customers, or in an established and highly regulated industry such as banking or insurance?

Do you have millions of customers on a legacy platform?

Do you have full freedom of ways of working to define your own processes, or are you bound to an existing corporate environment?

How much lead time does the business need for change management and go to market activities?

How many up-stream or down-stream dependencies do you have, and what are their lead times?

How easy is it to make a change in your ecosystem, and how much effort is required to handle changes to an upfront plan?

The initiative, project or program of work in play

Are you working on a single, isolated system with limited dependencies or part of a complex, interconnected large ecosystem?

Are other teams and systems dependent on the work you’re producing to deliver on a program of work?

How many people are involved in the initiative, project or program of work – one team of 5 people, 15 teams and 150 people or hundreds of teams and thousands of people?

The lifespan of the feature, application or system

Following the lifespan of a feature, application or system by Kent Beck – Explore, Expand, Extract & Extinct:

Is this feature a learning and innovative piece of work (i.e. Explore), needing extreme productivity and validated learning cycles?

Is this application or system scaling up and expanding (i.e. Expand), needing to overcome technical, system, process & people limitations?

Is this feature being delivered to an existing large customer base (i.e. Extract) where predictability and profitability are key drivers?

Is this feature, application or system being delivered at end of life (i.e. Extinct), hard to coordinate & expensive to change?

All problems are inherently different

Experience has taught me there’s often more than one way to solve a problem, each having a unique context & ecosystem; one size never fits all. Like most things in life, to move forward some trade-offs are likely required, and most teams will find themselves somewhere in the middle of the scale, doing enough work to be predictable without suffering significant productivity losses.

Where do you fit on the scale of predictability vs productivity?
What are your unique needs?
How fast do you want to go?
How predictable do you want to be?




Basecamp Shape Up Product Development Summary

Ryan Singer has documented Basecamp’s product development and delivery methodology in the ebook Shape Up – Stop Running in Circles and Ship Work that Matters. Shape Up describes Basecamp’s process of taking raw ideas and working through a shaping process to narrow to a core problem, remove unknowns / risks / deep rabbit holes, add project boundaries, prefer appetite over estimates and create a pitch. Stakeholders then bet on the work with a six-week build cycle and hand it over to a small, empowered build team, who discover the work through doing, build scopes of work, communicate progress through Hill Charts, use scope hammering and work in small vertical slices with a continuous-delivery mindset, attacking the most unknown / riskiest work early in the six-week cycle.

Key Messages:

  • Use a shaping process to define & validate a problem, to address any unknowns or risks
  • Focus on appetite instead of estimates
  • Prefer bets, not backlogs
  • Bet on a six week cycle with a circuit breaker at the end
  • Small empowered teams owning cycle outcomes
  • Deliver small vertical slices of the problem space
  • Build has two distinct phases – discover and validate (uphill), and known / execute (downhill)
  • Scope will grow as the delivery team learns more about the space, continuously hammer scope to deliver on six week commitment / cycle.

Please see below my notes / snippets (copied from the Shape Up ebook without permission); you can download a free copy of Shape Up from Basecamp: https://basecamp.com/shapeup/shape-up.pdf.

Notes

Six week cycles

  • Strict time box, acts as a circuit breaker, by default no extension.

Shaping the work

  • Senior group works in parallel to cycle team – focused on appetite (how much time do we want to spend)

Team fully responsible

  • Making hard calls to finish the project (cycle) on time

Targeting risk

  • The risk of not shipping on time. Solving open questions before we commit to a cycle. Build vertical deliverables, integrate early & continuously, sequence most unknown work first.

Part 1 Shaping

Wireframes are too concrete (they give designers no room for creativity – the design is anchored)

Words are too abstract (solving a problem with no context, hard to understand scope & make tradeoffs)

Good Shaping:

  • It’s rough
  • It’s solved
  • It’s bounded

Shaping is kept private (closed door) until commitment to a cycle is made

Two work tracks (cycles)

  • One for shaping
  • One for building

6 week cycles, shaping leads into building:

Cycle 1: Shaping 1
Cycle 2: Shaping 2 | Building 1
Cycle 3: Shaping 3 | Building 2
Cycle 4: Shaping 4 | Building 3
Cycle 5:           | Building 4

 

Appetite

  • Small batch (one designer, one or two programmers for one or two weeks)
  • Big batch (same team size, 6 weeks)

Fixed time, variable scope: An appetite is completely different from an estimate. Estimates start with a design and end with a number. Appetites start with a number and end with a design

Analyse the customer problem – when she asked for a calendar, we asked: what was she doing when the thought occurred to ask for it?

Breadboarding

Use words, or fat-marker visuals, to sketch the solution at the right level of abstraction.

Iterate on original idea

Fat marker visuals

  • Avoid getting into a high level of fidelity, or into the weeds

Do stress-testing and de-risking (find deep holes and challenges which could hinder)

The Pitch

Prefer pitches to be asynchronous communication – i.e. give people time to review offline in their own time; only escalate to real-time when necessary (i.e. a meeting with key stakeholders), and give notice in advance

People review the pitch and add comments (i.e. to poke holes / ask questions – not to say no to the pitch; that’s for the betting table)

Part 2 Betting

Bets, not backlogs – big backlogs are a big weight we don’t need to carry. Backlogs are big time wasters – constantly reviewing, grooming and organising. Each six-week cycle, a betting table is held where stakeholders decide what to do in the next cycle, choosing from a few well-shaped, risk-reduced options; the pitches are potential bets

  • If a pitch was great, but the time wasn’t right (there is no backlog), individuals can track the pitch independently and lobby for it again six weeks later

It’s easy to overvalue ideas – in truth ideas are cheap – don’t stockpile or backlog them. Really important ideas will come back to you.

6 Week Cycle

Cool Down

After every 6 week cycle, we schedule two weeks for cool down. This gives leaders enough time to breathe, meet and consider what to do next, and programmers and designers are free to work on whatever they want (i.e. fix bugs, explore new ideas, try out new technical possibilities).

Project teams consist of one designer & two programmers or one designer & one programmer (normally). A team spending an entire 6 week cycle is called the big batch team, and the team working on small projects (1-2 weeks) is called the small batch team.

The output of the betting meeting is called a cycle plan.

The cycle plan is a bet with a potential payout at the end of the cycle.

Cycles are dedicated commitments – uninterrupted time for the team to focus. The team can’t be pulled away to work on something else. When you make a bet, you honour it.

  • “When you pull someone away for one day to fix a bug or help a different team, you don’t just lose a day. You lose the momentum they built up and the time it will take to gain it back. Losing the wrong hour can kill a day. Losing a day can kill a week.”

We only plan one cycle ahead, and can always introduce critical work in the next cycle. If it’s a real crisis, we can always hit the brakes – but true crises are very rare.

Having a fixed 6 week cycle without any potential for increased time acts as a circuit breaker, preventing runaway projects and projects which overload the system. If a project doesn’t finish in the six weeks, it normally means a problem occurred in the shaping phase – perhaps it’s time to reframe the problem. “A hard deadline and the chance of not shipping motivates the team to regularly question how their designs and implementation decisions are affecting scope”

What about bugs? Unless it’s a P1/P2 (i.e. a crisis), they don’t naturally get priority over existing planned work; they can wait. This is how we address bugs:

  1. Use cool-down period
  2. Bring it to the betting table
  3. Schedule a bug smash (once a year, usually around holidays – a whole dedicated cycle to fixing bugs)

For projects larger than a 6 week cycle, we shape them (break them down) into 6 week cycles and only bet 6 weeks at a time.

Place Your Bets

  • Depending on whether we’re improving an existing product or building a new product, we’re going to set different expectations about what happens during the six week cycle.
    • Existing Products – Shape the work, bet on it, build.
    • New Products – broken into three phases:
      • 1. R&D mode: Learn what we want by building it (time boxed spikes, learn by doing), no expectation of shipping anything.
      • 2. Production mode: Shape the work, bet on it, build. Shipping is the goal (merging to main codebase), however not necessarily to end customers yet so we maintain the option to remove features from the final cut before launch
      • 3. Cleanup mode: A free for all, reserved capacity to finish things, or address things forgotten, bugs etc, no shaping, no clear team boundaries with work shipped continuously in as small bites as possible. Leadership make “final cut” decisions with cleanup not lasting longer than two cycles.

Examples

Betting table questions & debates

  • Does the problem matter?
  • Weighing up problems (options) against each other
  • Can we narrow down the problem (Pareto – 80% of the benefit from 20% of the change)
  • Is the appetite right (do we want to spend $xxx / weeks / cycles on this problem)?
  • Is the solution attractive?
  • Is it the right time?
  • Are the right people available?

After the betting table has finished, a kick-off message is posted announcing which projects we’re betting on for the next cycle and who will be working on them

Part 3 Building

Assign projects, not tasks. Nobody plays the role of “taskmaster” or the “architect” who splits the project up into pieces for other people to execute.

Team defines their own tasks and work within the boundaries of the pitch.

Team have full autonomy and can use their judgement to execute.

Done means deployed. All QA needs to happen within the cycle.

A Basecamp project is created, chat channel and kickoff call.

The first two to three days are radio silence from the team as they dive deep into the details and get acquainted with the problem.

The team starts off with imagined tasks and, through discovery, learns about the real tasks to complete. Teams discover by doing the real work.

Integrate one slice

Pick a small slice of the project (i.e. design, back-end & front-end coding) to deliver end to end, to show progress and gain feedback

Start in the middle

Start at the core of the problem (ie core screen and adding data to a database) and stub everything else out, rather than at the entry point (i.e. logging in). When choosing what to build first:

  • It should be core
  • It should be small
  • It should be novel (things you’ve never done before, address risk / uncertainty)

Organise by structure, not by person

Allow teams to self-organise around a problem, understand the scope, form a mental image, and break the work down into parts that are no larger than 1-2 days of effort – a series of mini scopes.

Scopes become the natural language of the project at the macro level. It will help communicate status and progress.

Scoping happens over time as the team learns (not necessarily all up front); You need to walk the territory before you can draw the map. Scopes need to be discovered by doing the real work; identifying imagined vs discovered tasks and seeing how things connect (and don’t connect).

How do you know if you have scoped right?

Usually takes a few weeks to get a clear understanding of scope

A typical software project is split into cake layers (front-end & back-end work, in thin slices). Watch out for icebergs, which have far more back-end or far more front-end work; look to simplify, reduce the problem and/or split these into separate projects.

There will always be things that don’t fit into specific scope buckets; we refer to these tasks as chowder.

Mark nice-to-have tasks with a leading ~ to separate them from must-haves.

Show Progress

We have to be cautious of big up-front plans – imagined tasks (in theory) vs. real tasks (in practice).

As the project progresses, the to-do lists actually grow as the team makes progress (making it very hard to report progress against an imagined up-front plan).

The problem with estimates is they don’t show uncertainty (or confidence level).

  • If you have two tasks, both estimated to take four hours:
    • the team has done task 1 many times, so you can be confident in the estimate
    • the team has never done task 2, or it has unclear interdependencies (lots of unknowns), so the estimate is uncertain and low-confidence

We came up with a way to see the status of a project without counting tasks and without numerical estimates – by shifting the focus from what’s done or not done to what’s unknown and what’s solved. We use the metaphor of the hill.

The uphill phase is full of uncertainty, unknowns and problem solving (ie discovery). The downhill phase is marked by certainty, confidence, seeing everything and knowing what to do.

We can combine the hill metaphor with the scopes to plot each one as a different colour on the hill.

A dot that doesn’t move over time is a red flag: someone might be stuck and need help (the Hill Chart identifies this without someone needing to say “I don’t know / I need help”). It changes the language and enables managers to help by asking “what can we solve to get that over the hill?”. A non-moving dot can also indicate work is progressing well but scope is significantly increasing with discovery; the team can break the scope apart into smaller scopes or redefine / reshape the problem.

Sometimes tasks backslide, which often happens when someone did the uphill work (i.e. discovery) with their head (imagined) instead of their hands (practice). Uphill can be broken into three phases:

  1. “I’ve thought about this”
  2. “I’ve validated my approach”
  3. “I’m far enough with what I’ve built that I don’t believe there are other unknowns”

Teams should attack the scariest / riskiest scope first within the cycle (giving more time to unknown tasks and less time to known tasks).

Journalists have a concept called the inverted pyramid: their articles start with the most essential information at the top and add details and background information in decreasing order of importance. Teams should plan their work this way too.

Deciding when to stop

There’s always more work than time. Shipping on time means shipping something that’s imperfect.

Pride in work is important for quality and morale, but we need to direct it at the right target. If we aim for the perfect design we’ll never get there; at the same time we don’t want to lower our standards.

Instead of comparing up to an ideal, compare down to a baseline – seeing the work as better than what customers have today: “it’s not perfect, but it works and is an improvement”.

Limits motivate trade-offs; a hard six week circuit breaker forces teams to make them.

Cutting scope isn’t lowering quality. Making choices makes the product better; it differentiates the product (better at some things).

Scope Hammering

Quality Assurance

Basecamp (serving millions of customers) has one QA person. The designers and programmers take responsibility for the basic quality of their work, and the QA person comes in towards the end of the cycle and hunts for edge cases outside the core functionality. Programmers write their own tests, and the team works together to ensure the project does what it should according to what was shaped.

We think of QA as a level up, not as a gate or check point.

The team can ship without waiting for a code review; there’s no formal checkpoint. But code review makes things better, so if there’s time, it makes sense.

When to extend a project

In rare cases we’ll allow a project to run past its deadline / cycle and use the following criteria:

  • The outstanding tasks must be “must haves”
  • The outstanding tasks must be all “down hill” – no unsolved problems, no open questions.

The cool down period can be used to finish a project, however the team needs to be disciplined and this shouldn’t become a habit (it points to a problem with shaping or team performance).

Move On

Shipping and going live can generate new work – through customer feedback, defects and new feature requests.

With customer feedback, treat as new raw ideas which need to go through the shaping process.

  • Use a gentle no (push back) with customers until ideas are shaped and the problem verified. If you say yes to customer requests, it can take away your freedom in the future (like taking on debt).

Feedback needs to be shaped.

Summary

As Basecamp has scaled to 50 people, we’ve been able to specialise roles:

  • Product team (12)
  • Specialised roles of designers and programmers
  • Dedicated Security, Infrastructure & Performance (SIP) handles technical work, lower in stack and more structural
  • Ops team (keep the lights on)
  • Support team

With this structure, we don’t need to interrupt the designers and programmers on our core product team working on shaped projects within cycles.




Benjamin Graham – The Intelligent Investor Summary

Graham focuses on Value Investing. According to Warren Buffet it’s “the best book ever written about investing”.

Book review (by Swedish Investor): https://www.youtube.com/watch?v=npoyc_X5zO8
Book Review (by Financial Freedom): https://www.youtube.com/watch?v=18r2RCVtqTg

Speculation vs. Investment

  • Perform thorough fundamental analysis of the companies in which you are investing, to promise safety of principal and an adequate return
  • Protect your assets via diversification
  • Seek stable companies with steady returns
  • Always seek a margin of safety

Develop an understanding of inflation and its impact on your wealth. If inflation is 2.5% and your bonds / investments are returning 2%, inflation is causing you to lose money (0.5% of purchasing power) each year.
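That real-return arithmetic can be sketched in a couple of lines (using the 2.5% / 2% figures from the example above; the exact Fisher form is my addition):

```python
# Sketch: real return after inflation, figures from the example above.
inflation = 0.025   # 2.5% annual inflation
nominal = 0.02      # 2.0% nominal bond return

# Simple approximation: real ~= nominal - inflation
approx_real = nominal - inflation
print(round(approx_real * 100, 1))  # -0.5 (% per year)

# Exact (Fisher) relation: (1 + nominal) / (1 + inflation) - 1
exact_real = (1 + nominal) / (1 + inflation) - 1
print(round(exact_real * 100, 2))   # -0.49 (% per year)
```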

Meet Mr. Market. Mr Market is not always rational (often either too optimistic or too pessimistic – bipolar in nature)

  • Be happy to sell when prices are ridiculously high
  • Be happy to buy when Mr Market offers you a bargain

A stock is an ownership interest in a business.

The underlying value of a company does not often equal the price someone is willing to pay for it.

  • A great company isn’t a great investment if you pay too much for the stock
  • The bigger the firm gets, the slower its growth rate becomes
  • Always be on the lookout for temporary unpopularity; allowing you to buy a great company at a great price

Two types of investors

1. Defensive (passive investor)

    • Portfolio of:
      • 50% stocks (max 75%)  & 50% bonds (min 25%)
        • Rebalance yearly
        • Invest regularly via Dollar cost averaging method
      • Diversify (10 to 30 companies, don’t over expose to certain industries)
      • Invest in only:
        • Large companies > $700m
        • Companies which are conservatively financed (assets are 200% of liabilities – 2 times)
        • Have paid dividends over the last 20 years
        • Have shown profit over the last 10 years (no earnings deficit)
        • At least 33% earnings growth over the last 10 years (equates to roughly 2.9% growth annually)
        • Buy the company cheap – market cap less than 1.5 times its net asset value (i.e. market cap < (assets – liabilities) x 1.5)
        • Buy cheap earnings – price-to-earnings ratio (P/E) of less than 15 (i.e. P/E < 15)
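The defensive-investor criteria above can be expressed as a simple screen. A minimal sketch, assuming figures in $m; the example company data is hypothetical:

```python
# Sketch: Graham's defensive-investor screen as code (criteria from above).
def passes_defensive_screen(c):
    """Return True only if a company meets every criterion listed above."""
    return (
        c["market_cap"] > 700                                  # large company
        and c["assets"] >= 2.0 * c["liabilities"]              # conservatively financed
        and c["years_dividends"] >= 20                         # dividend record
        and c["years_profitable"] >= 10                        # no earnings deficit
        and c["earnings_growth_10y"] >= 0.33                   # >=33% over 10 years
        and c["market_cap"] < 1.5 * (c["assets"] - c["liabilities"])  # cheap vs. net assets
        and c["pe_ratio"] < 15                                 # cheap earnings
    )

example = {  # hypothetical company
    "market_cap": 1200, "assets": 2000, "liabilities": 900,
    "years_dividends": 25, "years_profitable": 12,
    "earnings_growth_10y": 0.40, "pe_ratio": 12,
}
print(passes_defensive_screen(example))  # True
```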

2. Enterprising (active investor)

  • Need to invest a lot of time, be eager to learn, have patience and discipline
  • In general, avoid growth stocks as the value is based on ‘future earnings’ (may not materialise), rather than looking at a company’s current valuation.
  • If you can find a company where its price is less than its net working capital – you essentially purchase all its fixed assets for nothing
    • Net working capital = current assets – total liabilities
  • Portfolio of:
    • Higher returning / higher risk assets
    • Some diversification
    • Invest in any size companies
    • Companies which are less conservatively financed (assets are 150% of liabilities – 1.5 times)
    • Paid a dividend in the last year
    • Growth > 0% (don’t worry about deficits as much)
    • Buy the company cheap relative to tangible assets – market cap less than 1.2 times its net asset value (i.e. market cap < (assets – liabilities) x 1.2)

Key Concepts

Stock Valuation Concepts

  • Stock valuation is an art
  • Stock Valuation = Past and Current Numbers + Future Narrative
  • Stock valuation is a range, not an absolute (as its based on assumptions)
    • Plan different scenarios (i.e. headwinds, tailwinds etc.) and come up with different value forecasts (i.e. ranges)

How to determine Value

Value can be determined by:
  • (Original) Value = earnings per share x (8.5 + 2 x expected annual growth rate)
  • (Updated) Value = ( earnings per share x (8.5 + 2 x expected annual growth rate) x 4.4 ) / current yield on AAA rated corporate bond
  • Grahams value calculation
  • https://en.wikipedia.org/wiki/Benjamin_Graham_formula
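The two formulas above can be sketched directly (EPS in dollars; growth rate and AAA bond yield as percentages, e.g. 7 for 7%; the figures in the usage example are hypothetical):

```python
# Sketch of Graham's valuation formulas as listed above.
def graham_value(eps, growth):
    """Original formula: V = EPS x (8.5 + 2g)."""
    return eps * (8.5 + 2 * growth)

def graham_value_updated(eps, growth, aaa_yield):
    """Updated formula: V = EPS x (8.5 + 2g) x 4.4 / Y,
    where Y is the current yield on AAA-rated corporate bonds."""
    return graham_value(eps, growth) * 4.4 / aaa_yield

# Hypothetical company: EPS $2.00, 7% expected annual growth
print(graham_value(2.0, 7))                          # 45.0
print(round(graham_value_updated(2.0, 7, 4.4), 1))   # 45.0 (when Y = 4.4, both agree)
```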

Insist on a Margin of Safety (including how to determine value)

  • Mitigates the risk of being wrong (downside protection)
  • Don’t ever lose money
  • When the price is less than two thirds the value, you have a safety margin
    • Price < 2/3 of value (33% safety margin)
  • Graham looks for a 33% safety margin
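The two-thirds rule above reduces to a one-line check. A minimal sketch with hypothetical prices and values:

```python
# Sketch: Graham's margin-of-safety rule (price <= 2/3 of estimated value).
def has_margin_of_safety(price, value, margin=1/3):
    """True when the price is no more than (1 - margin) of estimated value."""
    return price <= value * (1 - margin)

print(has_margin_of_safety(price=30, value=45))  # True  (30 is 2/3 of 45)
print(has_margin_of_safety(price=35, value=45))  # False (insufficient margin)
```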

Insist on Moats

Risk & Reward are not always correlated

  • You don’t have to take a higher risk to achieve a higher reward
  • By committing to deep and time consuming analysis, by exercising maximum intelligence and skill you can find valuable companies to invest in (with low risk)



Software Delivery Estimate Guideline

I have used slightly differing versions of the below to outline what should be included in an estimate; please consider that each business environment, team and delivery process will have a differing context (i.e. know your context – KYC). For an overview of different estimation methods and templates please see my Software Development & Delivery Estimation article. The phase of the initiative / project (pre-discovery, discovery, delivery), the delivery methodology and the type of work at hand (i.e. small agile feature, initiative, or large project) will determine which of the below estimate methods to apply.

Generally there are two types of estimates:

  • High-Level – i.e. very early on, not much context, quick sizing based on little information
  • Detailed – i.e. close to delivery, team involved, more detail, agile point-based estimation

Our delivery estimates consider a person day to be 8 hours; however, when scheduling work (assuming we’re working outside a flow-based agile delivery model here) we should consider a person day of 6-7 hours (not 8), which allows for non-delivery work such as meetings, lunch, learning, training, discovery, estimation, production support, backlog grooming, story breakdown sessions, unplanned leave etc. If working in an agile method, velocity-based planning will naturally take care of these items.
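The conversion from an 8-hour-day estimate to scheduled calendar days can be sketched as follows (assuming 6.5 effective delivery hours per day, the midpoint of the 6-7 hours suggested above; the function name is illustrative):

```python
import math

def scheduled_days(estimated_person_days, effective_hours=6.5, day_hours=8):
    """Convert effort estimated in 8-hour person days into the number of
    calendar days to schedule, given fewer effective delivery hours per day."""
    return math.ceil(estimated_person_days * day_hours / effective_hours)

# A 10-person-day estimate needs roughly 13 scheduled calendar days
print(scheduled_days(10))  # 13
```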

What should be included in an estimate

  • Analysis effort
  • Technical design effort
  • Development effort
  • Unit testing
  • Manual testing of solution
  • Test automation (including functional, integration/API, E2E and smoke tests)
  • Refactoring tasks (if possible / agreed)
  • Story kickoffs, code/peer reviews and walkthroughs
  • Build pipelines, environment setup & configuration and deployment infrastructure
  • Defect fixing in system testing, end to end testing & user acceptance testing phases
  • Deployments through lower environments to production
  • Feature toggling / rollout strategy in production and support
  • Monitoring & alerting tasks
  • Production defect fixing
  • Documentation

What should not be included in an estimate

  • Buffer, fat or contingency – this will be added once, at the initiative / project level, when putting together estimates to share and the delivery plan. We want to avoid layering buffer/contingency at the task, feature, initiative and program levels.
  • Spikes / prototypes / proof of concepts – these should be estimated / played as a separate time-boxed task (ideally to inform your estimation)
  • Formal UAT Support – scheduled as a separate task in the delivery plan on large projects
  • Formal Warranty period / Hyper-care – scheduled as a separate task in the delivery plan on large projects
  • Customer meetings
  • General development activities outside this feature
  • Learning time, training, guilds, conferences etc



My view on anthropogenic climate change

I come from a different background than most, having grown up on a dairy farm and managed the business for the last 15 years. I am very aware of each year's weather patterns: droughts, normal years, high rainfall years, seasonal weather patterns (and seemingly unseasonal weather patterns / cycles) with water, fodder and grain shortages and surpluses, all of which significantly impact the business bottom line. We have migrated to a seasonal calving pattern (calving mid-July to end of October), and I've managed through seasons where our season ends in November (it simply stops raining and we have 9 months of dry, low-rainfall weather) through to seasons like this year (19/20) where it's rained all the way to March (and hopefully onwards). I've seen no Autumn breaks, where it doesn't rain from January to June, and great Autumn breaks (like this year, 19/20) where it doesn't stop raining. I've seen a lot of different weather patterns, floods and bushfires (including putting out spot-fires on our property during Black Saturday of 2009) and everything in between; you pay close attention when your livelihood depends on it.

This is a genuine post to understand and document my knowledge of the anthropogenic climate change issue, a debate which seems to have two polarising extremes, often ending in tense emotional exchanges. I'm on a learning journey and have been a critical thinker and skeptic for most of my life, not always accepting things at face value. This post attempts to explain my current views on the anthropogenic climate change debate, which appears to be much more than science: often religious and ideological in nature, with extremists and alarmists on both sides (climate deniers, climate alarmists etc.) and very little middle ground, acceptance of alternative views, or inclination to holistically explore all sides of the climate debate. Could both sides of the debate have valid views and points?

I suggest there is wide acceptance amongst scientists and commentators that human activity has had an impact on the Earth's climate, that carbon levels have been increasing (in part due to human emissions) and that worldwide temperatures have recently been rising. I believe the question boils down to: how much impact is anthropogenic climate change having? Are we in a crisis / emergency situation, should we wait and see, or should we accept human activity has minimal impact within Earth's natural climate cycles?

If we look at the climate change as an equation it would balance out like:

  • anthropogenic climate change + natural climate variability = total climate change

If either side of the equation is incorrect or unbalanced, we misattribute the causes of climate change and potentially make very large, costly and socially impacting policy changes, or under-act and let a climate catastrophe unfold. If we took some time to step away from the issue, we would probably all agree humans are harming our planet, and I think we should focus on absolute sustainability, measured at an individual, community and national level (i.e. a sustainability index). We're causing significant harm to the planet through some of the human activities below:

  • Deforestation and expanding farming and human habitats, reducing natural habitat
  • Consuming and depleting non-renewable resources such as oil, gas, coal, minerals & metals
  • Over-harvesting renewable resources such as game, fisheries, forests, fresh water, oxygen and soil, and impacting biodiversity
  • Polluting our environment with plastics and rubbish, including our oceans, land and atmosphere, impacting life
  • Many of Earth's species are going extinct (some naturally through evolution, some unnaturally through human activity)

Thoughts on Climate Change:

  • Earth's climate has been changing since creation and will continue to do so; the only certainty is change. There seems to be a consistent climate cycle over thousands, hundreds of thousands and millions of years.
  • A hundred years of human-measured data is likely not enough to make any confident assertions. Recent temperature increases (i.e. over a 15–20 year period) probably shouldn't be used to project future temperature; this is dangerous and could be considered a symptom of the “recency effect”.
  • The science behind man-made (anthropogenic) climate change and its impacts doesn't appear to be settled. There is little empirical evidence (i.e. a hypothesis proven at statistical significance, repeatable, with extraneous & confounding variables controlled, run in an open complex system – i.e. impossible) or longitudinal research that I'm currently aware of to suggest it will have a significant impact on the Earth's climate.
  • The claim that 97% of scientists agree that humans are the main cause of global warming has been historically analysed and debunked by Dr John Robson, who describes his analysis in this video:
    • There are no disputes from scientists that:
      • Nearly all scientists agree carbon is a greenhouse gas and has some warming effect
      • The Earth has been slightly warming since the 1800s
      • Humans have changed the environment of our planet, including releasing carbon, changing the landscape, polluting etc
    • There is limited survey data on scientists' views, including whether we are facing a climate emergency, with severe issues in survey design and misattribution / incorrect expansion of responses. He found the 97% consensus was amongst 2% of climate-expert respondents to this survey.
    • Australian researcher John Cook, who examined 12,000 scientific papers related to climate change, found 97% of studies concluded greenhouse gases were partly responsible for global warming; however, 66% of the studies expressed no view on the consensus, and extraordinarily only 0.3% of studies suggested most of the warming was attributable to human activity.
    • Economist Richard Tol later identified that two thirds of the 12K papers which endorsed weak man-made consensus actually said nothing at all about anthropogenic climate change.
  • Earth's climate is incredibly complex; even the IPCC admits its inability to accurately predict future climate and limitations with climate models (see the IPCC AR3 Report Annex section on climate prediction and climate projection) – “Climate projections are distinguished from climate predictions in order to emphasize that climate projections” … “, and are therefore subject to substantial uncertainty.”
  • The IPCC climate models are based on hundreds of parameters and haven't been shown to be anywhere near accurate; misleading at best, completely wrong at worst. Most predictions to date have been wrong – see Climate Model reviews since 1975, a peer review of 17 climate models here, and a Dr Roy Spencer review of CMIP5 models against observations of atmospheric warming (noting that even small model errors, when projected over a long future period, can be significantly misleading).
  • Have climate models included all variables (like solar activity, solar forcing, clouds creating an umbrella effect etc) as identified in this recent study?
  • Most of the carbon experiments and their temperature impacts have been conducted as closed-system experiments and then extrapolated to predict the impact on Earth's climate (a complex open system with thousands, if not hundreds of thousands, of variables)
  • Abrupt climate change has occurred in Earth's past, as documented here.
  • Earth's climate has changed significantly, from ice ages to warm periods (the mini ice age & medieval warm period), and continues to do so in long cycles – see the geologic temperature record.
  • Carbon levels in our atmosphere have been much higher in the past (see here) and are beneficial for life on earth and agriculture: more carbon = more life.
  • 31,486+ American scientists, including engineers in relevant disciplines, signed the Global Warming Petition requesting more evidence and proof of anthropogenic climate change and its claimed impacts on the Earth's climate.
  • Climate hysteria has been the norm throughout human existence. Through crop failures, famines, fires, floods, diseases, cyclones, tsunamis, earthquakes etc. we have always blamed something: the gods, religion, enemies, luck/misfortune, numerology, astrology, superstition, scientists, greenies etc. There have been hundreds of newspaper articles over the last hundred-plus years (see this article for 41 recent claims) suggesting our climate was doomed, our ice caps would melt, we're about to enter an ice age, and everything in between.
  • We've always had fires, floods, droughts and plagues in Australia; they are a cycle of life on this continent.

Questions To Climate Opponents & Proponents:

  • What evidence do you have to support anthropogenic climate change which has been produced in an open system / environment and could therefore be used to genuinely predict carbon-based climate change?
  • How do you explain the natural / variable changes in Earth's climate history shown through ice-core samples, sediment analysis, tree rings, geologic measures etc.?
  • What part do you think solar activity, the sun and sunspot cycles, solar winds, electromagnetic fields/waves, cosmic rays, ultraviolet and x-ray radiation & polar shifts play in Earth's climate activity?
  • What about the Sun's grand solar minimum and sunspot cycles – what impact do they have on Earth's climate?
  • Is our climate changing outside a normal distribution / extremes?
  • Are our droughts / fires getting worse outside a normal distribution / extremes?
  • With regards to climate models: what happens when you use a model or formula which contains a small error (even if only 0.1–0.3%), and then forecast out 10, 20 or 50 years based on that model/formula? How does the compounding effect of the model error impact the result?
  • Are we facing a climate catastrophe, causing irreversible damage, partly having an impact on climate, or are human carbon emissions negligible in the grand scheme?
  • Who benefits from the anthropogenic climate change agenda?
  • Who is funding both sides of the debate (coal / mining / oil, conservatives, Murdoch on one side, and climate change research funding, Greens, Soros on other etc), why are we not able to reach a middle ground or consensus?
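The compounding-error question above can be put numerically. This is a deliberately naive sketch of my own: it treats model error as a fixed multiplicative annual bias, which real climate models don't necessarily exhibit, so it is purely illustrative of how small errors grow over long horizons:

```python
def compounded_error(annual_error_pct, years):
    """Cumulative relative drift after compounding a small annual bias."""
    return (1 + annual_error_pct / 100) ** years - 1

# Even a 0.3% annual bias drifts noticeably over long forecast horizons
for years in (10, 20, 50):
    print(f"{years} years: {compounded_error(0.3, years):.1%} cumulative drift")
```

At 0.3% per year the drift exceeds 15% after 50 years, which is the compounding effect the question is pointing at.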

What We Should Do As a Society:

  • Reduce materialism, recycle, re-use, review impact on environment on each new purchase (make visible on each label / fact sheet)
  • Remove food and temporary ‘convenience’ plastics and create a plastic-tax on these items
  • Produce all food locally and consume only what's in season. Minimise transportation of food across continents (i.e. limit global trade for items which can be produced or sourced locally).
  • Protect our natural habitats and non-human species
  • Create a massive world wide movement (led by World Governments) to clean up our pollution to date including economic incentives to do so
  • Create an individual, community and national level sustainability index to measure our impact on Earth. Make this each country's economic target (rather than GDP)
  • Ensure all citizens of the earth have basic access to clean water, basic housing, electricity, communications & internet
  • Ensure we support free market principles for business and universal socialised services such as healthcare, education, policing, firefighting, ambulance, dentists, environmental protection & prosperity etc.
  • Measure all resources' sustainability levels to ensure a balanced and thriving Earth.

My current belief is that we simply don't understand, and can't predict, climate change with accuracy, and that a lot of speculation is treated as fact, much of it ideological and religious. Most climate models and parameters are at best misleading, at worst completely wrong. However, we can probably agree humans are slowly harming the planet, and we should focus on sustainability (rather than climate religion). Should we humans try to control all things? We simply can't, and shouldn't be expected to; some things will be outside human influence, and we should accept and deal with them the best we can to minimise their impact. I hope as a species we can work together (from all sides, including the extremes) to agree on a plan to leave Earth in a better place than when we arrived.

Can we work together?

Can both sides have solid views and both be correct?

Does the debate have to be so extreme?

What can we do as a group to be sustainable?

I love to learn and continue to evolve my views. I'd be very happy to critically explore any points you have to raise, so please leave your comments below!




Lean Enterprise – How High Performance Organisations Innovate at Scale Notes

Jez Humble, Joanne Molesky & Barry O’Reilly have teamed up to deliver an excellent book on applying lean and agile practices to enterprise business. The book focuses on how to maximise product discovery, product development and validated learning through experimentation, prioritisation through cost of delay, lean governance principles and modern funding practices, all to maximise value delivered in the shortest time. The book provides case studies on how traditional enterprise practices such as architecture, the project management office (PMO), change management, security and operations can apply similar lean product development methods to maximise value creation. It also provides an overview of modern software practices such as continuous delivery, test automation, experimentation and flow-based value creation.

Please see below my book notes / snippets (copied from my Kindle highlights with some minor edits, without permission). I highly recommend buying the book & keeping it as your lean, agile bible on how to continuously learn and get things done.

Lean Enterprise Book


On Running a Lean Business

Part I. Orient

The business world is moving from treating IT as a utility that improves internal operations to using rapid software- and technology-powered innovation cycles as a competitive advantage. Shareholder value is the dumbest idea in the world…[it is] a result, not a strategy…Your main constituencies are your employees, your customers, and your products. Research has shown that focusing only on maximising profits has the paradoxical effect of reducing the rate of return on investment. Rather, organisations succeed in the long term through developing their capacity to innovate and adopting the strategy articulated by Jack Welch in the above epigraph: focusing on employees, customers, and products.

The Toyota Production System (TPS) makes building quality into products the highest priority, so a problem must be fixed as soon as possible after it’s discovered, and the system must then be improved to try and prevent that from happening again. TPS introduced the famous andon cord process. In contrast, the heart of the TPS is creating a high-trust culture in which everybody is aligned in their goal of building a high-quality product on demand and where workers and managers collaborate across functions to constantly improve — and sometimes radically redesign — the system. These ideas from the TPS — a high-trust culture focused on continuous improvement (kaizen), powered by alignment and autonomy at all levels — are essential to building a large organisation that can adapt rapidly to changing conditions. The TPS, instead, requires workers to pursue mastery through continuous improvement, imbues them with a higher purpose — the pursuit of ever-higher levels of quality, value, and customer service — and provides a level of autonomy by empowering them to experiment with improvement ideas and to implement those that are successful.

Giving people pride in their work rather than trying to motivate them with carrots and sticks is an essential element of a high-performance culture.

The TPS does away with the concept of seniority in which union workers are assigned jobs based on how many years of service they have, with the best jobs going to the most senior. Under the TPS, everybody has to learn all the jobs required of their team and rotate through them. Toyota has always been very open about what it is doing, giving public tours of its plants, even to competitors — partly because it knows that what makes the TPS work is not so much any particular practices but the culture. Many people focus on the practices and tools popularised by the TPS, such as the andon cords. One GM vice president even ordered one of his managers to take pictures of every inch of the NUMMI plant so they could copy it precisely. The result was a factory with andon cords but with nobody pulling them because managers (following the principle of extrinsic motivation) were incentivised by the rate at which automobiles — of any quality — came off the line.

The key to understanding a lean enterprise is that it is primarily a human system.

  • Pathological organisations are characterised by large amounts of fear and threat. People often hoard information or withhold it for political reasons, or distort it to make themselves look better.
  • Bureaucratic organisations protect departments. Those in the department want to maintain their “turf,” insist on their own rules, and generally do things by the book — their book.
  • Generative organisations focus on the mission. How do we accomplish our goal? Everything is subordinated to good performance, to doing what we are supposed to do.

Figure 1. How organisations process information

Analysis showed that firms with high-performing IT organisations were twice as likely to exceed their profitability, market share, and productivity goals.

The survey also set out to examine the cultural factors that influenced organisational performance. The most important of these turned out to be whether people were satisfied with their jobs, based on the extent to which they agreed with the following statements (which are strongly reminiscent of the reaction of the NUMMI workers who were introduced to the Toyota Production System):

  • I would recommend this organisation as a good place to work.
  • I have the tools and resources to do my job well.
  • I am satisfied with my job.
  • My job makes good use of my skills and abilities.

Statistical analysis of the results showed that team culture was not only strongly correlated with organisational performance, it was also a strong predictor of job satisfaction. The results are clear: a high-trust, generative culture is not only important for creating a safe working environment — it is the foundation of creating a high-performance organisation.

Mission Command as an alternative to Command & Control

Command and control, the idea from scientific management that the people in charge make the plans and the people on the ground execute them, is an outdated model, highlighted by the defeat of the Prussian Army by Napoleon in 1806. Scharnhorst noted that Napoleon’s officers had the authority to make decisions as the situation on the ground changed, without waiting for approval through the chain of command. This allowed them to adapt rapidly to changing circumstances. “No plan survives contact with the enemy”; instead, the advice is: “The higher the level of command, the shorter and more general the orders should be”. Crucially, orders always include a passage which describes their intent, communicating the purpose of the orders. This allows subordinates to make good decisions in the face of emerging opportunities or obstacles which prevent them from following the original orders exactly (called Auftragstaktik or Mission Command).

Friction and Complex Adaptive Systems

Clausewitz’s concept of friction is an excellent metaphor for understanding the behaviour of complex adaptive systems such as an enterprise (or indeed any human organisation). Bungay argues that friction creates three gaps:

  • First, a knowledge gap arises when we engage in planning or acting, due to the necessarily imperfect state of the information we have to hand and our need to make assumptions and interpret that information.
  • Second, an alignment gap is the result of people failing to do things as planned, perhaps due to conflicting priorities, misunderstandings, or simply someone forgetting or ignoring some element of the plan.
  • Finally, there is an effects gap due to unpredictable changes in the environment, perhaps caused by other actors, or unexpected side effects producing outcomes that differ from those we anticipated.

Figure 2. Friction creates three gaps, and how to manage them

This principle is applied in multiple contexts:

Budgeting and financial management

  • Instead of a traditional budgeting process which requires all spending for the next year to be planned and locked down based on detailed projections and business plans, we set out high-level objectives across multiple perspectives such as people, organisation, operations, market, and finance that are reviewed regularly. This kind of exercise can be used at multiple levels, with resources allocated dynamically when needed and the indicators reviewed on a regular basis.

Program management

  • Instead of creating detailed, upfront plans on the work to be done and then breaking that work down into tiny little bits distributed to individual teams, we specify at the program level only the measurable objectives for each iteration. The teams then work out how to achieve those objectives, including collaborating with other teams and continuously integrating and testing their work to ensure they will meet the program-level objectives.

Process improvement

  • Working to continuously improve processes is a key element of the TPS and a powerful tool to transform organisations. We present the Improvement Kata in which we work in iterations, specifying target objectives for processes and providing the people who operate the processes the time and resources to run experiments they need to meet the target objectives for the next iteration.

Crucially, these mission-based processes must replace the command and control processes, not run alongside them.

The long-term value of an enterprise is not captured by the value of its products and intellectual property but rather by its ability to continuously increase the value it provides to customers — and to create new customers — through innovation.


Figure 3. Technology adoption lifecycle, from Dealing with Darwin by Geoffrey A. Moore, 2006

For this vision to become reality, there are two key assumptions that must be tested: the value hypothesis and the growth hypothesis.

We then design an experiment, called the minimum viable product, which we build in order to gather the necessary data from real customers to determine if we have a product/market fit. If our hypothesis is incorrect, we pivot, coming up with a new value hypothesis based on what we learned, and follow the steps above again; every iteration will result in validated learning.

What is an Option?

Purchasing an option gives us the right, but not the obligation, to do something in the future (typically to buy or sell an asset at a fixed price). Options have a price and an expiry date. Investing a fixed amount of time and money to investigate the economic parameters of an idea — be it a business model, product, or an innovation such as a process change — is an example of using optionality to manage the uncertainties of the decision to invest further.

Optionality is a powerful concept that lets you defer decisions on how to achieve a desired outcome by exploring multiple possible approaches simultaneously.

“When we decided to do a microprocessor, in hindsight, I think I made two great decisions. I trusted the team, and gave them two things that Intel and Motorola had never given their people: the first was no money and the second was no people. They had to keep it simple.”

Whenever you hear of a new IT project starting up with a large budget, teams of tens or hundreds of people, and a timeline of many months before something actually gets shipped, you can expect the project will go over time and budget and not deliver the expected value.

Sadly, however, whether the project “succeeds” according to these criteria is irrelevant and insignificant when compared to whether we actually created value for customers and for our organisation. Data gathered from evolving web-based systems reveals that the plan-based approach to feature development is very poor at creating value for customers and the organisation. Amazon’s and Microsoft’s research reveals the “humbling statistics”: 60%–90% of ideas do not improve the metrics they were intended to improve. Based on experiments at Microsoft:

  • 1/3 of ideas created a statistically significant positive change,
  • 1/3 produced no statistically significant difference, and
  • 1/3 created a statistically significant negative change.
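For readers wanting to see what “statistically significant” means for an experiment like these, below is a sketch of a standard two-proportion z-test. The conversion numbers are made up for illustration; this is a common textbook test, not the specific method Microsoft used:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-statistic and p-value for a difference in conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 5.6% conversion, 10,000 users per variant
z, p = two_proportion_z(500, 10_000, 560, 10_000)
print(f"z={z:.2f}, p={p:.4f}")  # significant at the usual level only if p < 0.05
```

Even an apparently healthy lift can land in the “no statistically significant difference” third if the sample is too small for the effect size.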

Due to a cognitive bias known as the planning fallacy, executives tend to “make decisions based on delusional optimism rather than on a rational weighing of gains, losses, and probabilities. They overestimate benefits and underestimate costs. They spin scenarios of success while overlooking the potential for mistakes and miscalculations. As a result, they pursue initiatives that are unlikely to come in on budget or on time or to deliver the expected returns — or even to be completed.”

Finally, because the project approach judges people according to whether work is completed on time and on budget, not based on the value delivered to customers, productivity gets measured based on output rather than outcomes.

We create an unsustainable “hero culture” that rewards overwork and high utilisation (making sure everybody is busy) rather than doing the least possible work to achieve the desired outcomes.

We describe how to run large-scale programs of work using the following principles:

  1. Define, measure, and manage outcomes rather than output. Applying the Principle of Mission, we specify “true north” for our program of work — our ideal stakeholder outcomes. Then, at the program level, we work iteratively, specifying for each iteration the measurable program-level outcomes we want to achieve. How to achieve these outcomes is delegated to teams working within the program. Based on the feedback from real customers after each iteration, we work to improve quality of demand, improve speed, and improve quality of outcomes.
  2. Manage for throughput rather than capacity or utilisation. We implement Kanban principles by making all work visible and limiting work in process. We then aim to stop starting work and start finishing it as soon as possible. We continuously undertake process improvement work to reduce lead time — the time it takes to deliver work — throughout the system. We use continuous delivery and work in small increments to make it cheap and low risk to deliver work in small batches with easier feedback loops.
  3. Ensure people are rewarded for favouring a long-view system-level perspective over pursuing short-term functional goals. People should be rewarded for continuous and effective (win-win) collaboration, for minimising the amount of work required to achieve the desired outcomes, and for reducing the complexity of the systems we create to enable these outcomes. People should not be punished when failures occur; rather, we must build a culture of experimentation and collaboration, design systems which make it safe to fail, and put in place processes so we can learn from our mistakes and use this information to make our systems more resilient.

Balancing the Enterprise Portfolio


Figure 4. The 3 product or business horizons

The problems occur when the acquired company — working on a horizon 3 or 2 product — is subjected to the horizon 1 governance, financial targets, and management structures of the acquiring enterprise, completely destroying its ability to innovate.

Our hypothesis is that organisations survive and grow in the medium and long term by balancing the ability to continuously explore potential new business models with effective exploitation of existing ones.

Intuit uses a simple model to balance horizons 1, 2, and 3; Google follows a similar model, but with different allocations:

  • 70% to Horizon 1
  • 20% to Horizon 2
  • 10% to Horizon 3

Part II. Explore

When faced with a new opportunity or a problem to be solved, our human instinct is to jump straight to a solution without adequately exploring the problem space, testing the assumptions inherent in the proposed solution, or challenging ourselves to validate the solution with real users.

Our mission would be to prevent anybody from commencing a major program to solve the problem or pursue the opportunity until they do the following:

  • Define the measurable business outcome to be achieved
  • Build the smallest possible prototype capable of demonstrating measurable progress towards that outcome
  • Demonstrate that the proposed solution actually provides value to the audience it is designed for

“Even in projects with very uncertain development costs, we haven’t found that those costs have a significant information value for the investment decision. The single most important unknown is whether the project will be canceled…The next most important variable is utilisation of the system, including how quickly the system rolls out and whether some people will use it at all.” Thus the business case essentially becomes a science fiction novel set in a universe that is poorly understood – or which may not even exist! Meanwhile, significant time is wasted on detailed planning, analysis, and estimation, which provides large amounts of information with extremely limited value.

There are two factors we care about in a business plan. The first is the sensitivity of the key metric to the various variables in the business case. The second is the level of uncertainty in the variables to which the key metric is sensitive. Given distributions and ranges for the key variables, a simple but powerful approach is to perform a Monte Carlo simulation to work out the possible outcomes.
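A minimal sketch of the Monte Carlo approach described above. The variables (adoption rate, revenue per user, build cost) and their ranges are hypothetical placeholders; the point is drawing from ranges rather than point estimates and reading off the distribution of outcomes:

```python
import random

def simulate_npv(n_trials=100_000, seed=7):
    """Monte Carlo sketch of a business case: draw uncertain inputs
    from ranges and summarise the distribution of outcomes."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        adoption = rng.uniform(0.2, 0.8)       # fraction of target users who adopt
        revenue_per_user = rng.uniform(5, 15)  # annual revenue per adopted user
        build_cost = rng.uniform(200_000, 600_000)
        users = 100_000                        # size of the target market
        outcomes.append(adoption * users * revenue_per_user - build_cost)
    outcomes.sort()
    return {
        "p10": outcomes[int(0.10 * n_trials)],
        "p50": outcomes[int(0.50 * n_trials)],
        "p90": outcomes[int(0.90 * n_trials)],
        "loss_probability": sum(o < 0 for o in outcomes) / n_trials,
    }

print(simulate_npv())
```

A percentile spread and a loss probability communicate far more about the decision than a single point-estimate NPV ever could.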

We should stop using the word “requirements” in product development, at least in the context of nontrivial features. What we have, rather, are hypotheses. We believe that a particular business model, or product, or feature, will prove valuable to customers. But we must test our assumptions. We can take a scientific approach to testing these assumptions by running experiments.

In Running Lean (O’Reilly), Ash Maurya explains how to execute a Lean Startup model:

  • Do not spend a lot of time creating a sophisticated business model. Instead, design a simplified business model canvas which captures and communicates the key operating assumptions of your proposed business model.
  • Gather information to determine if you have a problem worth solving — meaning that it is both solvable and people will pay for it to be solved. If both of these conditions obtain, we have achieved a problem/solution fit.
  • Then, design a minimum viable product (MVP) — an experiment designed to maximize learning from potential early adopters with minimum effort. In the likely case that the results of the MVP invalidate your product hypothesis, pivot and start again. Continue this process until you decide to abandon the initial problem, run out of resources, or discover a product/market fit. In the latter case, exit the explore phase and proceed to exploit the validated model.
  • Throughout this process, update the business model canvas based on what you learn from talking to customers and testing MVPs.

The purpose of measurement is not to gain certainty but to reduce uncertainty. The job of an experiment is to gather observations that quantitatively reduce uncertainty. The key principle to bear in mind is this: when the level of uncertainty of some variable is high, we need very little information to significantly reduce that uncertainty.

Definition of Measurement

Measurement: A quantitatively expressed reduction of uncertainty based on one or more observations.

This definition may seem counterintuitive unless you have experience running experiments in a scientific context. In experimental science, the result of a measurement is never simply a single value. It is, rather, a probability distribution which represents the range of possible values. Any measurement that doesn’t indicate the precision of the result is considered practically meaningless. For example, a measurement of my position with a precision of 1 meter is far more valuable than that same position with a precision of 500 miles. The point of investing in measurement in a scientific context is to reduce our uncertainty about the actual value of some quantity. Thus, in particular, if we express our estimates as precise numbers (as opposed to ranges), we are setting ourselves up for failure: the chance of us meeting a date 6 months in the future precisely to the day is practically zero.

Game theory actually provides a formula for the expected value of information (EVI). Hubbard defines the value of information as follows: “Roughly put, the value of information equals the chance of being wrong times the cost of being wrong.”

The cost of being wrong — that is, what is lost if your decision doesn’t work out — is called an opportunity loss. When we multiply the opportunity loss by the chance of a loss, we get the expected opportunity loss (EOL). Calculating the value of information boils down to determining how much it will reduce EOL.
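The arithmetic behind EOL is simple enough to sketch; the figures below are invented for illustration:

```python
def expected_opportunity_loss(chance_of_being_wrong, cost_of_being_wrong):
    """EOL = chance of being wrong x cost of being wrong."""
    return chance_of_being_wrong * cost_of_being_wrong

# Illustrative numbers: before any measurement we estimate a 40% chance the
# feature fails, costing us the 500k we would spend building it.
eol_before = expected_opportunity_loss(0.40, 500_000)

# Suppose a cheap MVP experiment would reduce that chance to 10%.
eol_after = expected_opportunity_loss(0.10, 500_000)

# The value of the measurement is the reduction in EOL: the experiment is
# worth running if it costs less than this amount.
value_of_information = eol_before - eol_after
print(value_of_information)
```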

The OODA loop

Figure 4. The OODA loop

Boyd’s theory of maneuver warfare. OODA stands for observe, orient, decide, act, the four activities that comprise the loop.

Deming's Plan Do Check Act cycle

Figure 5. Deming’s Plan Do Check Act cycle

When everybody in the organisation has been trained to employ the scientific approach to innovation as part of their daily work, we will have created a generative culture.

Traditional project planning versus Lean Startup Skill or behavior

Figure 6. Traditional project planning versus Lean Startup Skill or behaviour

Discovery is a rapid, time-boxed, interactive set of activities that integrates the practices and principles of design thinking and Lean Startup. “Design thinking takes a solution-focused approach to problem solving, working collaboratively to iterate an endless, shifting path towards perfection. It works towards product goals via specific ideation, prototyping, implementation, and learning steps to bring the appropriate solution to light.”

As Dan Pink argues in Drive, there are three key elements to consider when building an engaged and highly motivated team. First, success requires a shared sense of purpose in the entire team. The vision needs to be challenging enough for the group to have something to aspire to, but clear enough so that everyone can understand what they need to do. Second, people must be empowered by their leaders to work autonomously to achieve the team objectives. Finally, people need the space and opportunity to master their discipline, not just to learn how to achieve “good enough.”

Go Gamestorming

Gamestorming, by David Gray et al., and the supporting Go Gamestorming Wiki contain numerous games that encourage engagement and creativity while bringing structure and clarity to collaborative ideation, innovation, and improvement workshops.

Divergent Thinking

Figure 7. Divergent Thinking

Divergent thinking is the ability to offer different, unique, or variant ideas adherent to one theme; convergent thinking is the ability to identify a potential solution for a given problem. We start exploration with divergent thinking exercises designed to generate multiple ideas for discussion and debate. We then use convergent thinking to identify a possible solution to the problem. From here, we are ready to formulate an experiment to test it.

 

Business Model Canvas

Figure 8. Business Model Canvas

The Business Model Canvas, shown in Figure 8, was created by Alex Osterwalder and Yves Pigneur along with 470 co-creators as a simple, visual business model design generator. It is a strategic management and entrepreneurial tool that enables teams to describe, design, challenge, invent, and pivot business models. The canvas is freely available at http://www.businessmodelgeneration.com/canvas.

Beyond the template itself, Osterwalder also came up with four levels of strategic mastery of competing on business models to reflect the strategic intent of an organization:

  • Level 0 Strategy The Oblivious focus on product/value propositions alone rather than the value proposition and the business model.
  • Level 1 Strategy The Beginners use the Business Model Canvas as a checklist.
  • Level 2 Strategy The Masters outcompete others with a superior business model where all building blocks reinforce each other (e.g., Toyota, Walmart, Dell).
  • Level 3 Strategy The Invincible continuously self-disrupt while their business models are still successful (e.g., Apple, Amazon).

There are a number of canvases created by others that focus on product development:

  • The Lean Canvas: Makes the assumption that product/market fit is the riskiest hypothesis that must be tested.
  • The Opportunity Canvas: Focuses discussions about what we’re building and why, then helps you understand how satisfying those specific customers and users furthers the organisation’s overall strategy.
  • Value Proposition Canvas: Describes how our products and services create customer gains and deliver the benefits our customers expect, desire, or would be interested in.

Minimum Viable Product Definition

Confusingly, people often refer to any validation activity anywhere along on this spectrum as an MVP, overloading the term and understanding of it in the organisation or wider industry. Marty Cagan, author of Inspired: How to Create Products Customers Love and ex-SVP for eBay, notably uses the term “MVP test” to refer to what Eric Ries calls an MVP. Cagan defines an MVP as “the smallest possible product that has three critical characteristics: people choose to use it or buy it; people can figure out how to use it; and we can deliver it when we need it with the resources available — also known as valuable, usable, and feasible,” to which we add “delightful,” since design and aesthetics are also as essential for an MVP as for a finished product.

Minimum Viable Product - Usable, Valuable, Feasible & Delightful

Figure 9. Minimum Viable Product – Usable, Valuable, Feasible & Delightful

MVPs, as shown in Figure 10, do not guarantee success; they are designed to test the assumptions of a problem we wish to solve without over-investing. By far the most likely outcome is that we learn our assumptions were invalid and we need to pivot or stop our approach. Our ultimate goal is to minimize investment when exploring solutions until we are confident we have discovered the right product — then, exploit the opportunity by adding further complexity and value to build the product right.

Lean MVP methods

Figure 10. An example set of types of lean MVPs

Paul Graham, http://paulgraham.com/ds.html

Cagan defines vision as the shared understanding that “describes the types of services you intend to provide, and the types of customers you intend to serve, typically over a 2-5 year timeframe.”

One Metric That Matters

One Metric That Matters (OMTM) is a single metric that we prioritize as the most important to drive decisions depending on the stage of our product lifecycle and our business model. It is not a single measure that we will use throughout our product lifetime: it will change over time depending on the problem area we wish to address. We focus on One Metric That Matters to:

  • Answer the most pressing question we have by linking it to the assumptions in the hypothesis we want to test
  • Create focus, conversation, and thought to identify problems and stimulate improvement
  • Provide transparency and a shared understanding across the team and wider organization
  • Support a culture of experimentation by basing it on rates or ratios, not averages or totals, relevant to our historical dataset

The OMTM should not be a lagging metric such as return on investment (ROI) or customer churn, both of which measure output after the fact. Lagging indicators become interesting later, once we have achieved product/market fit. By initially focusing on leading metrics, we get an indication of what is likely to happen, and can address a situation quickly to change the outcome.

The purpose of the OMTM is to gain objective evidence that the changes we are making to our product are having a measurable impact on the behavior of our customers. Ultimately we are seeking to understand:

  • Are we making progress (the what)?
  • What caused the change (the why)?
  • How do we improve (the how)?

Use A3 Thinking as a Systematic Method for Realizing Improvement Opportunities

A3 Thinking is composed of seven elements embedding the Plan-Do-Check-Act cycle of experimentation:

  • Background
  • Current condition and problem statement
  • Goal statement
  • Root-cause analysis
  • Countermeasures
  • Check/confirmation of effect
  • Follow-up actions and report

Other examples include the Five Ws and One H (Who, What, Where, When, Why, How).

A3 Thinking

Figure 11. Example of A3 Thinking on a page

 

Remember, metrics are meant to hurt — not to make us feel like we are winning. They must be actionable and trigger a change in our behavior or understanding. We need to consider these two key questions when deciding on what our OMTM will be:

What is the problem we are trying to solve?

  • Product development
  • Tool selection
  • Process improvement

What stage of the process are we at?

  • Problem validation
  • Solution validation
  • MVP validation

Eric Ries introduced the term innovation accounting to refer to the rigorous process of defining, experimenting, measuring, and communicating the true progress of innovation for new products, business models, or initiatives.

Profitability to Sales ratio for early stage innovations

Figure 12. Profitability to Sales ratio for early stage innovations

 

Measurement Fallacy

Unfortunately, often what we tend to see collected and socialized in organizations are vanity metrics designed to make us feel good but offering no clear guidance on what action to take. In Lean Analytics, Alistair Croll and Benjamin Yoskovitz note, “If you have a piece of data on which you cannot act, it’s a vanity metric…A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?”

List of vanity metrics vs actionable metrics

Figure 13. List of vanity metrics vs actionable metrics

Vanity vs. Actionable metrics

  • Number of visits vs. Funnel metrics, cohort analysis
  • Time on site, number of pages vs. Number of sessions per user
  • Emails collected vs. Email action
  • Number of downloads vs. User activations
  • Tool usage vs. Tooling effect
  • Number of trained people vs. Higher throughput
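The difference between a vanity total and an actionable ratio can be seen in a toy cohort analysis. All the numbers below are invented for illustration:

```python
# Toy cohort analysis: signups per month, and how many from each cohort were
# still active 30 days later.
cohorts = {
    "Jan": {"signups": 1000, "active_day_30": 200},
    "Feb": {"signups": 2000, "active_day_30": 300},
    "Mar": {"signups": 4000, "active_day_30": 400},
}

# Vanity metric: total signups only ever goes up, so it always feels like winning.
total_signups = sum(c["signups"] for c in cohorts.values())

# Actionable metric: 30-day retention per cohort. Here it is falling
# (20% -> 15% -> 10%), which demands a change in behaviour despite the
# growing total.
retention = {m: c["active_day_30"] / c["signups"] for m, c in cohorts.items()}

print(total_signups, retention)
```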

“If you can define the outcome you really want, give examples of it, and identify how those consequences are observable, then you can design measurements that will measure the outcomes that matter. The problem is that, if anything, managers were simply measuring what seemed simplest to measure (i.e., just what they currently knew how to measure), not what mattered most.”

Pirate AARRR metrics

Figure 14. Pirate AARRR metrics

Sample innovation scorecard

Figure 15. Sample innovation scorecard

In terms of governance, the most important thing to do is have a regular weekly or fortnightly meeting which includes the product and engineering leads within the team, along with some key stakeholders from outside the team (such as a leader in charge of the Horizon 3 portfolio and its senior product and engineering representatives).

In the early stages, we must spend less time worrying about growth and focus on significant customer interaction. We may go so far as to only acquire customers individually — too many customers too early can lead to a lack of focus and slow us down. We need to focus on finding passionate early adopters to continue to experiment and learn with. Then, we seek to engage similar customer segments to eventually “cross the chasm” to wider customer acquisition and adoption.

Our goal should be to create a pull system for customers that want our product, service, or tools, not to push a mandated, planned, and fully baked solution upon people that we must “sell” or require them to use.

Our runway should be a list of hypotheses to test, not a list of requirements to build. When we reward our teams for their ability to deliver requirements, it’s easy to rapidly bloat our products with unnecessary features — leading to increased complexity, higher maintenance costs, and limited ability to change. Features delivered are not a measure of success, business outcomes are.

User Story Map sample

Figure 16. User Story Map sample

Create a Story Map to Tell the Narrative of the Runway of Our Vision

Story maps are a tool developed by Jeff Patton, explained in his book User Story Mapping. As Patton states, “Your software has a backbone and a skeleton — and your map shows it.”

Our advice is this. There are two practices that should be adhered to from the beginning that will allow us to pay down technical debt later on: continuous integration and a small number of basic unit and user-journey tests.

Having forced ourselves to do something that should be unnatural to engineers — hack out embarrassingly crappy code and get out of the building to get validation from early on — we must then pull the lever hard in the other direction, kill the momentum, and transition our focus from building the right thing to building the thing right. Needless to say, this requires extreme discipline.

In The Lean Startup, Eric Ries argues that there are three key strategies for growth — choose one:

  • Viral
    • Includes any product that causes new customers to sign up as a necessary side effect of existing customers’ normal usage: Facebook, MySpace, AIM/ICQ, Hotmail, Paypal. Key metrics are acquisition and referral, combined into the now-famous viral coefficient.
  • Pay
    • Is when we use a fraction of the lifetime value of each customer and flow that back into paid acquisition through search engine marketing, banner ads, public relations, affiliates, etc. The spread between your customer lifetime value and blended customer acquisition cost determines either your profitability or your rate of growth, and a high valuation depends on balancing these two factors. Retention is the key goal in this model. Examples are Amazon and Netflix.
  • Sticky
    • Means something causes customers to become addicted to the product, and no matter how we acquire a new customer, we tend to keep them. The metric for sticky is the “churn rate” — the fraction of customers in any period who fail to remain engaged with our product or service. Low churn can lead to exponential growth. For eBay, stickiness is the result of the incredible network effects of their business.

For enterprises, however, there are further growth options to consider:

  • Expand
    • Is building an adaptive initial business model that we could simply evolve and expand further by opening up new geographies, categories, and adjacencies. Amazon has executed this strategy excellently, moving from selling books to an e-commerce store offering new retail categories. With this growth strategy, the initial targeted market should be large enough to support multiple phases of growth over time.
  • Platform
    • Once we have a successful core product, we transform it into a platform around which an “ecosystem” of complementary products and services is developed by both internal and external providers. Microsoft did this with Windows by creating MS Office, Money, and other support packages, including those developed by external vendors. Other platform examples include Apple’s AppStore, Salesforce’s Force.com, and Amazon’s Marketplace and Web Services offerings.
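Two of the key metrics named above, the viral coefficient and the churn rate, are simple ratios. Here is a hedged sketch with invented numbers:

```python
def viral_coefficient(invites_per_user, conversion_rate):
    """k = invites sent per customer x fraction of invites that convert.
    Growth compounds when k > 1."""
    return invites_per_user * conversion_rate

def customers_after(periods, initial, k):
    """Naive viral growth model: each period, existing customers bring in
    k new customers each."""
    customers = initial
    for _ in range(periods):
        customers += customers * k
    return customers

# Illustrative: 5 invites per user, 25% of invites convert -> k = 1.25
k = viral_coefficient(5, 0.25)
print(customers_after(3, 100, k))  # compounding growth from 100 customers

# For the sticky engine, churn rate is the key metric instead.
def churn_rate(customers_lost, customers_at_start):
    return customers_lost / customers_at_start

print(churn_rate(50, 1000))  # 5% of customers lost in the period
```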

Part III. Exploit

Water Scrum Fall

Figure 17. Water — Scrum — Fall

Figure 17 shows the typical “Water — Scrum — Fall” method: a project-based paradigm established post-WW2 for large military, aviation, and space projects. It represents a traditional phase-gate approach in which no value is delivered until units are fully manufactured, and detailed upfront specifications mean that very little change occurs in response to new information. None of these conditions apply to software-based systems today.

We will present the following principles for lean-agile product development at scale:

  • Implement an iterative continuous improvement process at the leadership level with concise, clearly specified outcomes to create alignment at scale, following the Principle of Mission.
  • Work scientifically towards challenging goals, which will lead you to identify and remove (or avoid) non-value-adding activity.
  • Use continuous delivery to reduce the risk of releases, decrease cycle time, and make it economic to work in small batches.
  • Evolve an architecture that supports loosely coupled, customer-facing teams which have autonomy in how they work to achieve the program-level outcomes.
  • Reduce batch sizes and take an experimental approach to the product development process.
  • Increase and amplify feedback loops to make smaller, more frequent decisions based on the information we learn from performing our work, maximising customer value.

Achieving high performance in organisations that treat software as a strategic advantage relies on alignment between the IT function and the rest of the organisation, along with the ability of IT to execute.

The researchers concluded that to achieve high performance, companies that rely on software should focus first and foremost on their ability to execute, build reliable systems, and work to continually reduce complexity. Only then will pursuing alignment with business priorities pay off.

They approached this by using activity accounting — allocating costs to the activities the team is performing.

Money spent on support is generally serving failure demand, as distinct from value demand, which was only driving 5% of the team’s costs.

With Improvement Kata, everybody should be running experiments on a daily basis. Each day, people in the team go through answering the following five questions:

  1. What is the target condition?
  2. What is the actual condition now?
  3. What obstacles do you think are preventing you from reaching the target condition? Which one are you addressing now?
  4. What is your next step? (Start of PDCA cycle.) What do you expect?
  5. When can we go and see what we learned from taking that step?

As we continuously repeat the cycle, we reflect on the last step taken to introduce improvement. What did we expect? What actually happened? What did we learn? We might work on the same obstacle for several days.

Tom Gilb proposed in his 1988 work Principles of Software Engineering Management:

[perfectpullquote align=”full” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]

We must set measurable objectives for each next small delivery step. Even these are subject to constant modification as we learn about reality. It is simply not possible to set an ambitious set of multiple quality, resource, and functional objectives, and be sure of meeting them all as planned. We must be prepared for compromise and trade-off. We must then design (engineer) the immediate technical solution, build it, test it, deliver it — and get feedback. This feedback must be used to modify the immediate design (if necessary), modify the major architectural ideas (if necessary), and modify both the short-term and the long-term objectives (if necessary).

[/perfectpullquote]

Even today, many people think that Lean is a management-led activity and that it’s about simply cutting costs. In reality, it requires investing to remove waste and reduce failure demand — it is a worker-led activity that, ultimately, can continuously drive down costs and improve quality and productivity.

It’s often hard to make the outcome of improvement work tangible — which is why it’s important to make it visible by activity accounting, including measuring the cycle time and the time spent serving failure demand such as rework.

Identify Value and Increase Flow

The actual number used to prioritise features is known as cost of delay divided by duration (or “CD3”). It is calculated as cost of delay for a feature divided by the amount of time we estimate it will take to develop and deliver that feature. This takes into account the fact that we have limited people and resources available to complete work, and that if a particular feature takes a long time to develop it will “push out” other features.
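CD3 is a straightforward calculation. The sketch below uses invented cost-of-delay figures to show how a small, urgent feature can outrank a large one:

```python
def cd3(cost_of_delay_per_week, duration_weeks):
    """CD3 = cost of delay / duration; higher scores should ship first."""
    return cost_of_delay_per_week / duration_weeks

# Illustrative backlog: (feature, cost of delay per week, weeks to build).
features = [
    ("A", 10_000, 5),  # CD3 = 2000
    ("B", 4_000, 1),   # CD3 = 4000
    ("C", 20_000, 8),  # CD3 = 2500
]

# Feature B has the smallest cost of delay but the highest CD3, because it is
# quick to build and does not push out other work for long.
ranked = sorted(features, key=lambda f: cd3(f[1], f[2]), reverse=True)
print([name for name, *_ in ranked])  # ['B', 'C', 'A']
```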

The best way to understand where problems start is by performing an activity called value stream mapping.

Sample Value Stream Map

Figure 18. Sample Value Stream Map

 

We can visualise the dynamics of the value stream by creating a cumulative flow diagram that shows the amount of work in each queue and process block over time. As shown in Figure 19, a cumulative flow diagram plots delivery progress over time across the phases and queues that work flows through (from Backlog to Validated Learning), making visible both work in progress (WIP) and average lead time.

Cumulative Flow Diagram

Figure 19. Cumulative Flow Diagram
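The averages a cumulative flow diagram exposes are related by Little’s law: average lead time equals average WIP divided by average throughput. A minimal sketch with invented numbers:

```python
def average_lead_time(avg_wip, avg_throughput_per_week):
    """Little's law: lead time = WIP / throughput (here in weeks)."""
    return avg_wip / avg_throughput_per_week

# Illustrative: 12 items in progress on average, 4 items finished per week.
print(average_lead_time(12, 4))  # 3.0 weeks on average from start to done

# Halving WIP halves lead time without anyone working any faster.
print(average_lead_time(6, 4))   # 1.5 weeks
```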

 

The Kanban Method offers a comprehensive way to manage the flow of work through the product development value stream by using the following practices:

  • Visualise workflow by creating a board showing the current work in process within the value stream in real time.
  • Limit work in process by setting WIP limits for each process block and queue within a value stream, and updating them in order to trade off lead time against utilisation (how busy people are).
  • Define classes of service for different types of work and the processes through which they will be managed, to ensure that urgent or time-sensitive work is prioritised appropriately.
  • Create a pull system by agreeing on how work will be accepted into each process block when capacity becomes available — perhaps by setting up a regular meeting where stakeholders decide what work should be prioritised based on available capacity.
  • Hold regular “operational reviews” for the stakeholders within each process block to analyse their performance and update WIP limits, classes of service, and the method through which work is accepted.

Kanban board

Figure 20. Sample Kanban board
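The pull-system practice above can be sketched as a WIP-limited column that only accepts work when capacity frees up. The class and story names are invented for illustration:

```python
class KanbanColumn:
    """Minimal sketch of a WIP-limited process block on a kanban board."""

    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = []

    def has_capacity(self):
        return len(self.items) < self.wip_limit

    def pull(self, upstream):
        """Pull work only when capacity frees up — work is never pushed in."""
        if self.has_capacity() and upstream.items:
            self.items.append(upstream.items.pop(0))
            return True
        return False

backlog = KanbanColumn("Backlog", wip_limit=100)
backlog.items = ["story-1", "story-2", "story-3"]
dev = KanbanColumn("In Development", wip_limit=2)

dev.pull(backlog)         # story-1 pulled
dev.pull(backlog)         # story-2 pulled; the column is now at its WIP limit
print(dev.pull(backlog))  # False: story-3 must wait until something finishes
```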

Reducing lead times in this way requires that there be sufficient slack in the system to manage the WIP effectively.

The Fundamentals of Continuous Delivery

There are two golden rules of continuous delivery that must be followed by everybody:

  1. The team is not allowed to say they are “done” with any piece of work until their code is in trunk on version control and releasable (for hosted services the bar is even higher — “done” means deployed to production). In The Lean Startup, Eric Ries argues that for new features that aren’t simple user requests, the team must also have run experiments on real users to determine if the feature achieves the desired outcome.
  2. The team must prioritise keeping the system in a deployable state over doing new work. This means that if at any point we are not confident we can take whatever is on trunk in version control and deliver it to users through an automated, push-button process, we need to stop working and fix that problem.

To find out if you’re really doing CI, ask your team the following questions:

  • Are all the developers on the team checking into trunk (not just merging from trunk into their branches or working copies) at least once a day? In other words, are they doing trunk-based development and working in small batches?
  • Does every change to trunk kick off a build process, including running a set of automated tests to detect regressions?
  • When the build and test process fails, does the team fix the build within a few minutes, either by fixing the breakage or by reverting the change that caused the build to break?

If the answer to any of these questions is “no,” you aren’t practicing continuous integration.

The most important principle for doing low-risk releases is this: decouple deployment and release. To understand this principle, we must first define these terms. Deployment is the installation of a given version of a piece of software to a given environment. The decision to perform a deployment — including to production — should be a purely technical one. Release is the process of making a feature, or a set of features, available to customers. Release should be a purely business decision.
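One common mechanism for decoupling deployment from release is a feature flag: the code ships to production dark, and the release becomes a configuration change. The flag store and function names below are invented for illustration; real systems typically use a flag-management service rather than an in-process dict:

```python
# The new checkout code is deployed to production, but not yet released.
FLAGS = {"new_checkout": False}

def is_released(flag_name):
    return FLAGS.get(flag_name, False)

def new_checkout_flow(cart):
    return f"new:{len(cart)}"      # stand-in for the new behaviour

def legacy_checkout_flow(cart):
    return f"legacy:{len(cart)}"   # stand-in for the existing behaviour

def checkout(cart):
    # Deployment put both code paths in production; the flag decides which
    # one customers actually see.
    if is_released("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book"]))      # legacy:1 — deployed but dark
FLAGS["new_checkout"] = True   # the release is a business decision, no deploy needed
print(checkout(["book"]))      # new:1
```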

Organisations and continuous delivery maturity

Figure 21. As organizations work to implement continuous delivery, they will have to change the way they approach version control, software development, architecture, testing, and infrastructure and database management

However, we do not propose solutions to achieve these goals or write stories or features (especially not “epics”) at the program level. Rather, it is up to the teams within the program to decide how they will achieve these goals. This is critical to achieving high performance at scale, for two reasons:

  • The initial solutions we come up with are unlikely to be the best. Better solutions are discovered by creating, testing, and refining multiple options to discover what best solves the problem at hand.
  • Organisations can only move fast at scale when the people building the solutions have a deep understanding of both user needs and business strategy and come up with their own ideas.

A program-level backlog is not an effective way to drive these behaviours — it just reflects the almost irresistible human tendency to specify “the means of doing something, rather than the result we want.”

Getting to Target Conditions

Gojko Adzic presents a technique called impact mapping to break down high-level business goals at the program level into testable hypotheses. Adzic describes an impact map as “a visualisation of scope and underlying assumptions, created collaboratively by a cross-functional group of stakeholders. It is a mind-map grown during a discussion facilitated by answering the following questions:

  1. Why?
  2. Who?
  3. How?
  4. What?”
Impact Mapping

Figure 22. Impact Mapping

Once we have a prioritised list of target conditions and impact maps created collaboratively by technical and business people, it is up to the teams to determine the shortest possible path to the target condition. This tool differs in important ways from many standard approaches to thinking about requirements. Here are some of the important differences and the motivations behind them:

  • There are no lists of features at the program level
    • Features are simply a mechanism for achieving the goal. To paraphrase Adzic, if achieving the target condition with a completely different set of features than we envisaged won’t count as success, we have chosen the wrong target condition. Specifying target conditions rather than features allows us to rapidly respond to changes in our environment and to the information we gather from stakeholders as we work towards the target condition. It prevents “feature churn” during the iteration. Most importantly, it is the most effective way to make use of the talents of those who work for us; this motivates them by giving them an opportunity to pursue mastery, autonomy, and purpose.
  • There is no detailed estimation
    • We aim for a list of target conditions that is a stretch goal — in other words, if all our assumptions are good and all our bets pay off, we think it would be possible to achieve them. However, this rarely happens, which means we may not achieve some of the lower-priority target conditions. If we are regularly achieving much less, we need to rebalance our target conditions in favour of process improvement goals. Keeping the iterations short — 2–4 weeks initially — enables us to adjust the target conditions in response to what we discover during the iteration. This allows us to quickly detect if we are on a wrong path and try a different approach before we overinvest in the wrong things.
  • There are no “architectural epics”
    • The people doing the work should have complete freedom to do whatever improvement work they like (including architectural changes, automation, and refactoring) to best achieve the target conditions. If we want to drive out particular goals which will require architectural work, such as compliance or improved performance, we specify these in our target conditions.

First, we create a hypothesis based on our assumption. In Lean UX, Josh Seiden and Jeff Gothelf suggest the template shown below as a starting point for capturing hypotheses.

[perfectpullquote align=”full” bordertop=”false” cite=”” link=”” color=”” class=”” size=””]

We believe that
[building this feature]
[for these people]
will achieve [this outcome].
We will know we are successful when we see [this signal from the market].

[/perfectpullquote]

There are many different ways to conduct research, generate results, and assess them against your hypothesis. See Figure 23 below, which shows different methods of user research across four quadrants: generative, evaluative, quantitative, and qualitative. For more on different types of user research, read UX for Lean Startups (O’Reilly) by Laura Klein.

Types of User Research

Figure 23. Types of User Research across four quadrants of generative, evaluative, quantitative and qualitative

The key outcome of an experiment is information: we aim to reduce the uncertainty as to whether the proposed work will achieve the target condition.

Ronny Kohavi, who directed Amazon’s Data Mining and Personalisation group before joining Microsoft as General Manager of its Experimentation Platform, reveals that 60%–90% of ideas do not improve the metric they were intended to improve. Thus, if we’re not running experiments to test the value of new ideas before completely developing them, the chances are that about two thirds of the work we are doing is of either zero or negative value to our customers — and certainly of negative value to our organisation, since this work costs us in three ways.

Kohavi's team were also able to calculate a dollar amount for the revenue impact of performance improvements, discovering that "an engineer that improves server performance by 10 msec more than pays for his fully-loaded annual costs."

One of the most common challenges encountered in software development is the focus of teams, product managers, and organisations on managing cost rather than value. This typically manifests itself in undue effort spent on zero-value-add activities such as detailed upfront analysis, estimation, scope management, and backlog grooming. These symptoms are the result of focusing on maximising utilisation (keeping our expensive people busy) and output (measuring their work product) — instead of focusing on outcomes, minimising the output required to achieve them, and reducing lead times to get fast feedback on our decisions.

Implement Mission Command

CEO Jeff Bezos turned this problem into an opportunity. He wanted Amazon to become a platform that other businesses could leverage, with the ultimate goal of better meeting customer needs. With this in mind, he sent a memo to technical staff directing them to create a service-oriented architecture, which Steve Yegge summarises thus:

  1. All teams will henceforth expose their data and functionality through service interfaces.
  2. Teams must communicate with each other through these interfaces.
  3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
  4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter. Bezos doesn’t care.
  5. All service interfaces, without exception, must be designed from the ground up to be externalisable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
  6. Anyone who doesn’t do this will be fired.

Bezos hired West Point Academy graduate and ex-Army Ranger Rick Dalzell to enforce these rules. Bezos mandated another important change along with these rules: each service would be owned by a cross-functional team that would build and run the service throughout its lifecycle. As Werner Vogels, CTO of Amazon, says, “You build it, you run it.”

Amazon stipulated that all teams must conform to the “two pizza” rule: they should be small enough that two pizzas can feed the whole team — usually about 5 to 10 people. This limit on size has four important effects:

  1. It ensures the team has a clear, shared understanding of the system they are working on. As teams get larger, the amount of communication required for everybody to know what’s going on scales in a combinatorial fashion.
  2. It limits the growth rate of the product or service being worked on. By limiting the size of the team, we limit the rate at which their system can evolve. This also helps to ensure the team maintains a shared understanding of the system.
  3. Perhaps most importantly, it decentralises power and creates autonomy, following the Principle of Mission. Each two-pizza team (2PT) is as autonomous as possible. The team's lead, working with the executive team, would decide upon the key business metric that the team is responsible for, known as the fitness function, which becomes the overall evaluation criteria for the team's experiments. The team is then able to act autonomously to maximise that metric.
  4. Leading a 2PT is a way for employees to gain some leadership experience in an environment where failure does not have catastrophic consequences — which “helped the company attract and retain entrepreneurial talent.”
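The combinatorial scaling mentioned in point 1 above is easy to quantify: with n people there are n(n-1)/2 possible pairwise communication paths, so doubling a team roughly quadruples its communication overhead. A quick sketch:

```python
def communication_paths(team_size: int) -> int:
    # Each pair of people is a potential communication path: n * (n - 1) / 2
    return team_size * (team_size - 1) // 2

# A two-pizza team versus larger groups
for n in (5, 10, 20, 50):
    print(f"{n:>3} people -> {communication_paths(n):>5} paths")
```

A 5-person team has 10 paths to maintain; a 50-person team has 1,225, which is why a shared understanding of the system becomes impossible at that size.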

To avoid the communication overhead that can kill productivity as we scale software development, Amazon leveraged one of the most important laws of software development — Conway’s Law: “Organisations which design systems…are constrained to produce designs which are copies of the communication structures of these organisations.” One way to apply Conway’s Law is to align API boundaries with team boundaries. In this way we can distribute teams all across the world. Organisations often try to fight Conway’s Law. A common example is splitting teams by function, e.g., by putting engineers and testers in different locations (or, even worse, by outsourcing testers). Another example is when the front end for a product is developed by one team, the business logic by a second, and the database by a third. Since any new feature requires changes to all three, we require a great deal of communication between these teams, which is severely impacted if they are in separate locations. Splitting teams by function or architectural layer typically leads to a great deal of rework, disagreements over specifications, poor handoffs, and people sitting idle waiting for somebody else.

In truly decentralised organisations, we follow the principle of subsidiarity: by default, decisions should be made by the people who are directly affected by those decisions. Higher levels of bureaucracy should only perform tasks that cannot be performed effectively at the local level — that is, the authority of higher levels of bureaucracy should be subsidiary to that of the local levels.

We ensure teams are aligned by using the Improvement Kata: that is, by having iterations at the program level with defined target conditions and having teams collaborate to work out how to achieve them. Here are some strategies enterprises have successfully applied to create autonomy for individual teams:

  • Give teams the tools and authority to push changes to production
    • In companies such as Amazon, Netflix, and Etsy, teams, in many cases, do not need to raise tickets and have changes reviewed by an advisory board to get them deployed to production. In fact, in Etsy this authority is devolved not just to teams but to individual engineers. Engineers are expected to consult with each other before pushing changes, and certain types of high-risk changes (such as database changes or changes to a PCI-DSS cardholder data environment) are managed out of band. But in general, engineers are expected to run automated tests and consult with other people on their team to determine the risk of each change — and are trusted to act appropriately based on this information. ITIL supports this concept in the form of standard changes. All changes that launch dark (and which thus form the basis of A/B tests) should be considered standard changes. In return, it’s essential that teams are responsible for supporting their changes.
  • Ensure that teams have the people they need to design, run, and evolve experiments
    • Each team should have the authority and necessary skills to come up with a hypothesis, design an experiment, put an A/B test into production, and gather the resulting data. Since the teams are small, this usually means they are cross-functional with a mix of people: some generalists with one or two deep specialisms (sometimes known as “T-shaped” people8), along with specialist staff such as a database administrator, a UX expert, and a domain expert. This does not preclude having centralised teams of specialists who can provide support to product teams on demand.
  • Ensure that teams have the authority to choose their own toolchain
    • Mandating a toolchain for a team to use is an example of optimising for the needs of procurement and finance rather than for the people doing the work. Teams must be free to choose their own tools. One exception to this is the technology stack used to run services in production. Ideally, the team will use a platform or infrastructure service (PaaS or IaaS) provided by internal IT or an external provider, enabling teams to self-service deployments to testing and (where applicable) production environments on demand through an API (not through a ticketing system or email). If no such system exists, or it is unsuitable, the team should be allowed to choose their own stack — but must be prepared to meet any applicable regulatory constraints and bear the costs of supporting the system in production.
  • Ensure teams do not require funding approval to run experiments
    • The techniques described in this book make it cheap to run experiments, so funding should not be a barrier to testing new ideas. Teams should not require approval to spend money up to a certain limit.
  • Ensure leaders focus on implementing Mission Command
    • In a growing organisation, leaders must continuously work to simplify processes and business complexity, to increase effectiveness, autonomy, and capabilities of the smallest organisational units, and to grow new leaders within these units.

Creating small, autonomous teams makes it economic for them to work in small batches. When done correctly, this combination has several important benefits:

  • Faster learning, improved customer service, less time spent on work that does not add value
  • Better understanding of user needs
  • Highly motivated people
  • Easier to calculate profit and loss 

Architecting for continuous delivery and service orientation means evolving systems that are testable and deployable. Testable systems are those for which we can quickly gain a high level of confidence in the correctness of the system without relying on extensive manual testing in expensive integrated environments. Deployable systems are those that are designed to be quickly, safely, and independently deployed to testing and (in the case of web-based systems) production environments. These “cross-functional” requirements are just as important as performance, security, scalability, and reliability, but they are often ignored or given second-class status.

Amazon did not replace their monolithic Obidos architecture in a “big bang” replacement program. Instead, they moved to a service-oriented architecture incrementally, while continuing to deliver new functionality, using a pattern known as the “strangler application.” As described by Martin Fowler, the pattern involves gradual replacement of a system by implementing new features in a new application that is loosely coupled to the existing system, porting existing functionality from the original application only where necessary.  Over time, the old application is “strangled” — just like a tree enveloped by a tropical strangler fig.
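The strangler pattern can be sketched as a thin routing layer in front of both systems: migrated paths go to the new application, everything else falls through to the legacy one. The route prefixes and service names below are hypothetical:

```python
# Minimal sketch of strangler-application routing. As features are
# ported, their path prefixes move into MIGRATED_PREFIXES; the legacy
# system serves progressively less traffic until it can be retired.
MIGRATED_PREFIXES = ["/recommendations", "/search"]  # features ported so far

def route(path: str) -> str:
    """Decide which backend serves a given request path."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return "new-service"
    return "legacy-monolith"

print(route("/search?q=books"))  # handled by the new application
print(route("/checkout"))        # still handled by the old system
```

The routing table is the only shared artefact; each new feature lands behind it without a big-bang cutover.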

Part IV. Transform

To add further complexity to this problem, many of our traditional approaches to governance, risk, and compliance (GRC), financial management, procurement, vendor/supplier management, and human resources (recruiting, promotion, compensation) create additional waste and bottlenecks. These can only be eliminated when the entire organisation embraces lean concepts and everyone works together in the same direction.

In The Corporate Culture Survival Guide, Schein defines culture as “a pattern of shared tacit assumptions that was learned by a group as it solved its problems of external adaptation and internal integration, that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems.”

Your Startup Is Broken: Inside the Toxic Heart of Tech Culture, provides another perspective, commenting that “our true culture is made primarily of the things no one will say…Culture is about power dynamics, unspoken priorities and beliefs, mythologies, conflicts, enforcement of social norms, creation of in/out groups and distribution of wealth and control inside companies.”

In his management classic The Human Side of Enterprise, Douglas McGregor describes two contrasting sets of beliefs held by managers he observed, which he calls Theory X and Theory Y. Managers who hold Theory X assumptions believe that people are inherently lazy and unambitious and value job security more than responsibility; extrinsic (carrot-and-stick) motivation techniques are the most effective to deal with workers. In contrast, Theory Y managers believe “that employees could and would link their own goals to those of the organisation, would delegate more, function more as teachers and coaches, and help employees develop incentives and controls that they themselves would monitor.”

People involved in non-routine work are motivated by intrinsic factors, summarised by Dan Pink as:

  1. Autonomy — the desire to direct our own lives.
  2. Mastery — the urge to get better and better at something that matters.
  3. Purpose — the yearning to do what we do in the service of something larger than ourselves.

Culture is hard to change by design. As Schein says, “Culture is so stable and difficult to change because it represents the accumulated learning of a group — the ways of thinking, feeling, and perceiving the world that have made the group successful.”

Writing in MIT Sloan Management Review, John Shook, Toyota City's first US employee, reflected on how the cultural change at the NUMMI plant was achieved:

  • What my NUMMI experience taught me that was so powerful was that the way to change culture is not to first change how people think, but instead to start by changing how people behave — what they do. Those of us trying to change our organisations’ culture need to define the things we want to do, the ways we want to behave and want each other to behave, to provide training and then to do what is necessary to reinforce those behaviours. The culture will change as a result…What changed the culture at NUMMI wasn’t an abstract notion of “employee involvement” or “a learning organisation” or even “culture” at all. What changed the culture was giving employees the means by which they could successfully do their jobs. It was communicating clearly to employees what their jobs were and providing the training and tools to enable them to perform those jobs successfully.

It’s hard to achieve sustained, systemic change without any crisis. In The Corporate Culture Survival Guide, Schein asks if crisis is a necessary condition of successful transformations; his answer is, “Because humans avoid unpredictability and uncertainty, hence create cultures, the basic argument for adult learning is that indeed we do need some new stimulus to upset the equilibrium. The best way to think about such a stimulus is as disconfirmation: something is perceived or felt that is not expected and that upsets some of our beliefs or assumptions…disconfirmation creates survival anxiety — that something bad will happen if we don’t change — or guilt — we realize that we are not achieving our own ideals or goals.”

Old and new approaches to cultural change

Figure 24. Old and new approaches to cultural change

Once people accept the need for change, they are confronted with the fear that they may fail at learning the new skills and behaviour required of them, or that they may lose status or some significant part of their identity — a phenomenon Schein calls learning anxiety. Schein postulates that for change to succeed, survival anxiety must be greater than learning anxiety, and to achieve this, “learning anxiety must be reduced rather than increasing survival anxiety.”

At the beginning of every postmortem, every participant should read aloud the following words, known as the Retrospective Prime Directive: “Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.”

Given that the culture of an organisation has such a dominant effect on the performance of individuals, should we care at all about the particular skills and attitudes of individuals? Instead of taking a “bank account” view that focuses on people’s existing capabilities, it’s more important to consider their ability to acquire new skills — particularly in the field of technology where useful knowledge and skills change rapidly.

Dweck's two mindsets – Fixed & Growth Mindset

Figure 25. Dweck's two mindsets – Fixed & Growth Mindset

Google has done a great deal of research into what makes for an effective recruiting process in the context of technology. The top three criteria are:

  • Learning ability, including the ability to “process on the fly” and “pull together disparate bits of information.”
  • Leadership, “in particular emergent leadership as opposed to traditional leadership. Traditional leadership is, were you president of the chess club? Were you vice president of sales? How quickly did you get there? We don’t care. What we care about is, when faced with a problem and you’re a member of a team, do you, at the appropriate time, step in and lead. And just as critically, do you step back and stop leading, do you let someone else? Because what’s critical to be an effective leader in this environment is you have to be willing to relinquish power.”
  • Mindset. “Successful bright people rarely experience failure, and so they don’t learn how to learn from that failure…They, instead, commit the fundamental attribution error, which is if something good happens, it’s because I’m a genius. If something bad happens, it’s because someone’s an idiot or I didn’t get the resources or the market moved.”

Bock goes on to observe that the most successful people at Google "will have a fierce position. They'll argue like hell. They'll be zealots about their point of view. But then you say, here's a new fact, and they'll go, Oh, well, that changes things; you're right." To deal with an uncertain future and still move forward, people should have "strong opinions, weakly held."

Embrace Lean Thinking for Governance, Risk and Compliance

We often hear that Lean Startup principles and the techniques and practices we suggest in this book would never work in large enterprises because of governance. "This won't meet regulatory requirements." "That doesn't fit in our change management process." "Our team can't have access to servers or production." These are just a few examples of the many reasons people have given for dismissing the possibility of changing the way they work. When we hear these objections, we recognise that people aren't really talking about governance; they are referring to processes that have been put in place to manage risk and compliance, and conflating them with governance. Like any other processes within an organisation, those established for managing governance, risk, and compliance (GRC) must be targets for continuous improvement to ensure they contribute to overall value. In this discussion we refer to such teams as "GRC teams"; our examples focus on teams that strongly influence how technology can be used within organisations, the most common being the PMO, technical architecture, information security, risk and compliance, and internal audit teams.

Governance is about keeping our organisation on course. It is the primary responsibility of the board of directors, but it applies to all people and other entities working for the organisation. It requires the following concepts and principles to be applied at all levels:

  • Responsibility
    • Each individual is responsible for the activities, tasks, and decisions they make in their day-to-day work and for how those decisions affect the overall ability to deliver value to stakeholders.
  • Authority or accountability
    • There is an understanding of who has the power and responsibility to influence behaviours within the organisation and of how it works.
  • Visibility
    • Everyone at all times can view the outcomes achieved by the organisation and its components, based on current and real data. This, in turn, can be mapped to the organisation’s strategic goals and objectives.
  • Empowerment
    • The authority to act to improve the delivery of value to stakeholders is granted at the right level — to the people who will deal with the results of the decision.

Risk is our exposure to the possibility of something unpleasant occurring. We all manage risks daily, at work, home, and play. As it is impossible to eliminate every risk, the question to be answered in managing risk is, "Which risks are you willing to live with?"

Compliance is obedience to laws, industry regulations, legally binding contracts, and even cultural norms. The intention of mandated compliance is usually to protect the interest of stakeholders with regard to privacy of information, physical safety, and financial investments.

Management Is Not Governance

COBIT clearly explains the difference between governance and management:

  • Governance ensures that stakeholder needs, conditions, and options are evaluated to determine balanced agreed-on enterprise objectives to be achieved; sets direction through prioritisation and decision making; and monitors performance and compliance against agreed-on direction and objectives.
  • Management plans, builds, runs, and monitors activities in alignment with the direction set by the governance body to achieve the enterprise objectives.

Good GRC management maintains a balance between implementing enough control to prevent bad things from happening and allowing creativity and experimentation to continuously improve the value delivered to stakeholders.

Unfortunately, many GRC management processes within enterprises are designed and implemented within a command-and-control paradigm. They are highly centralised and are viewed as the purview of specialised GRC teams, who are not held accountable for the outcomes of the processes they mandate. The processes and controls these teams decree are often derived from popular frameworks without regard to the context in which they will be applied and without considering their impact on the entire value stream of the work they affect. They often fail to keep pace with technology changes and capabilities that would allow the desired outcomes to be achieved by more lightweight and responsive means. This forces delivery teams to complete activities that add no overall value, creates bottlenecks, and increases the overall risk of failure to deliver in a timely manner.

In How to Measure Anything, Douglas Hubbard reports Peter Tippet of Cybertrust discussing “what he finds to be a predominant mode of thinking about [IT security]. He calls it the ‘wouldn’t it be horrible if…’ approach. In this framework, IT security specialists imagine a particularly catastrophic event occurring. Regardless of its likelihood, it must be avoided at all costs. Tippet observes: ‘since every area has a “wouldn’t it be horrible if…” all things need to be done. There is no sense of prioritisation.’” When prioritising work across our portfolio, there must be no free pass for work mitigating “bad things” to jump to the front of the line. Instead, quantify risks by considering their impacts and probabilities using impact mapping and then use Cost of Delay to balance the mitigation work against other priorities. In this way we can manage security and compliance risks using an economic framework instead of fear, uncertainty, and doubt.

When GRC teams do not take a principles-based approach and instead prescribe the rules that teams must blindly follow, the familiar result is risk management theater: an expensive performance that is designed to give the appearance of managing risk but actually increases the chances of unintended negative consequences.

Preventive controls, when executed on the wrong level, often lead to unnecessarily high costs, forcing teams to:

  • Wait for another team to complete menial tasks that can be easily automated and run when needed
  • Obtain approvals from busy people who do not have a good understanding of the risks involved in the decision and thus become bottlenecks
  • Create large volumes of documentation of questionable accuracy which becomes obsolete shortly after it is finished
  • Push large batches of work to teams and special committees for approval and processing and then wait for responses

To meet compliance requirements and reduce security risks, many organisations now include information security specialists as members of cross-functional product teams. Their role is to help the team identify the possible security threats and the level of controls required to reduce them to an acceptable level. They are consulted from the beginning and are engaged in all aspects of product delivery:

  • Contributing to design for privacy and security
  • Developing automated security tests that can be included in the deployment pipeline
  • Pairing with developers and testers to help them understand how to prevent adding common vulnerabilities to the code base
  • Automating the process of testing security patches to systems

As working members of the team, information security specialists help shorten feedback loops related to security, reduce overall security risks in the solution, improve collaboration and the knowledge of information security issues in other team members, and themselves learn more about the context of the code and the delivery practices.
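For instance, one of the simplest automated checks a security specialist might wire into the deployment pipeline is verifying that responses from a test deployment carry a baseline set of security headers. A sketch, with an illustrative (not complete) header list:

```python
# Sketch of a pipeline security gate: fail the build if a response from
# the test deployment is missing baseline security headers.
# The required-header set below is illustrative, not a full policy.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "Content-Security-Policy",
}

def missing_security_headers(response_headers: dict) -> set:
    """Return the required headers absent from a response."""
    return REQUIRED_HEADERS - set(response_headers)

# Simulated response headers from a test deployment
headers = {
    "Content-Type": "text/html",
    "Strict-Transport-Security": "max-age=31536000",
}
missing = missing_security_headers(headers)
print(sorted(missing))  # a non-empty result would fail the pipeline stage
```

Checks like this run on every change, so the feedback loop on a regression is minutes rather than a pre-release audit.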

Evolve Financial Management to Drive Product Innovation

In many large enterprises, financial management processes (FMPs) are designed around the project paradigm. This presents an obstacle to taking a product-based approach to innovation. It is relatively easy for small teams to work and collaborate amongst themselves. However, on an enterprise scale, we eventually reach a point where evolution is blocked by rigid, centralised FMPs that drive the delivery and procurement processes that limit the options for innovating at scale.

We consider the organisational financial management practices within enterprises that are typically identified as deterrents to innovation:

  • Basing business decisions on a centralised annual budget cycle, with exceptions considered only under extreme circumstances. This combines forecasting, planning, and monitoring into a single centralised process, performed once a year, which results in suboptimal output from each of these important activities.
  • Using the capability to hit budget targets as a key indicator of performance for individuals, teams, and the organisation as a whole, which merely tells you how well people play the process, not the outcomes they have achieved over the past year.
  • Basing business decisions on the financial reporting structure of capital versus operating expense. This limits the ability to innovate by starting with a minimal viable product that grows gradually or can be discarded at any time. The CapEx/OpEx model of reporting costs is largely based on physical assets and is project based; it does not translate well to the use of information to experiment, learn, and continually improve products over time.

However, in the context of product development, the traditional annual budget cycle can easily:

  • Reduce transparency into the actual costs of delivering value — costs are allocated by functional cost centers or by which bucket the money comes from, without an end-to-end product view.
  • Remove decisions from the people doing the work: upper management establishes and mandates detailed targets.
  • Direct costs away from value creation by enforcing exhaustive processes for approving, tracking, and justifying costs.
  • Measure performance by the ability to please the boss or produce output — not by actual customer outcomes — by rewarding those who meet budget targets, no matter what the overall and long-range cost may be.

The great planning fallacy, evident in the centralised budget process, is that if we develop a detailed upfront financial plan for the upcoming year, it will simply happen — if we stick to the plan. The effort to develop these kinds of plans is a waste of time and resources, because product development is as much about discovery as it is about execution. Costs will change, new opportunities will arise, and some planned work will turn out not to generate the desired outcomes. In today’s world of globalisation, rapid technology growth, and increasing unpredictability it is foolish to think that accurate, precise plans are achievable or even desirable.

Activity-based accounting (or costing) allows us to allocate the total costs of services and activities to the business activity or product that drives those costs. It provides us with a better picture of the true financial value being delivered by the product.
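As a toy illustration of the idea (all figures invented): a shared cost is allocated to products in proportion to the activity that drives it, rather than being buried in a functional cost centre.

```python
# Illustrative activity-based costing: allocate a shared platform cost
# to products in proportion to the activity that drives it (here, the
# number of deployments each product makes). Figures are made up.
platform_cost = 90_000
deployments = {"product-a": 60, "product-b": 30, "product-c": 10}

total_activity = sum(deployments.values())
allocation = {
    product: platform_cost * count / total_activity
    for product, count in deployments.items()
}

for product, cost in allocation.items():
    print(f"{product}: ${cost:,.0f}")
```

Each product now carries a cost proportional to the load it actually places on the platform, giving a truer end-to-end picture of its financial value.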

The traditional process, however, prevents us from paying attention to the most important questions: did we plan at the right level, set good targets, get more efficient, or improve customer satisfaction? Are our products improving or dying? Are we in a better financial position than we were before?

The traditional process also serves to obscure the true cost of ownership and escalates operating costs. A project will be fully capitalised, allowing us to spread out the reporting of that cost over an extended period, so it has less short-term impact on our profit. However, many of the items that are being capitalised during the initial project have an immediate negative impact on our OpEx, starting before or immediately after the project team disbands. The long-term operating costs required to support the increasing complexity of systems created by projects are not calculated when capitalised projects are approved (because they don't come out of the same bucket). Ongoing support and retirement of products and services is an OpEx problem. In the end, OpEx teams are stuck with justifying their ever-growing costs caused by the bloat and complexity created by CapEx decisions. If we are serious about innovation, it shouldn't really matter which bucket funding comes from. Open, frank discussion, based on real evidence of the total end-to-end cost of the product, is what we should use as the basis of business decisions. Allocation of a product's development funding to CapEx or OpEx should be performed by accountants after the business decisions are made.

The first mistake in the typical procurement process is thinking that, with large amounts of upfront planning, we can manage the risk of getting something that doesn't deliver the expected value. This planning is normally done through a request for proposal (RFP), which has several negative side effects:

  • It's a poor way to manage the risks of product development
  • It favours incumbents
  • It favours large service providers
  • It inhibits transparency
  • It is inaccurate
  • It ignores outcomes

The second mistake in the typical procurement process is that it assumes all service providers are equal, in both the quality of the people working on the delivery and the quality of the software delivered.

Turn IT into a Competitive Advantage

High-performing IT organisations are able to achieve both high throughput, measured in terms of change lead time and deployment frequency, and high stability, measured as the time to restore service after an outage or an event that caused degraded quality of service. High-performing IT organisations also have 50% lower change fail rates than medium- and low-performing IT organisations.

The practices most highly correlated with high IT performance (increasing both throughput and stability) are:

  • Keeping systems configuration, application configuration, and application code in version control
  • Logging and monitoring systems that produce failure alerts
  • Developers breaking up large features into small, incremental changes that are merged into trunk daily
  • Developers and operations regularly achieving win/win outcomes when they interact

There are two other factors that strongly predict high performance in IT. The first is a high-trust organisational culture. The second is a lightweight peer-reviewed change approval process.

Instead of creating controls to compensate for pathological cultures, the solution is to create a culture in which people take responsibility for the consequences of their actions — in particular, customer outcomes. There is a simple but far-reaching prescription to enable this behaviour:

  1. You build it, you run it.
  2. Turn central IT into a product development organisation.
  3. Invest in reducing the complexity of existing systems. 

While moving to external cloud suppliers carries different risks compared to managing infrastructure in-house, many of the reasons commonly provided for creating a “private cloud” do not stand up to scrutiny. Leaders should treat objections citing cost and data security with skepticism: is it reasonable to suppose your company’s information security team will do a better job than Amazon, Microsoft, or Google, or that your organisation will be able to procure cheaper hardware?

When using COTS, it is crucial not to customise the packages. We can’t emphasise strongly enough the problems and risks associated with customising COTS. When organisations begin customising, it’s hard to stop — but customisations of COTS packages are extremely expensive to build and maintain over time. Once you get beyond a certain amount of customisation, the original vendor will often no longer support the package. Upgrading customised packages is incredibly painful, and it’s hard to make changes quickly and safely to a customised COTS system.

Start Where You Are

First, starting small with a cross-functional team and gradually growing the capability of the product, while delivering value iteratively and incrementally, is an extremely effective way to mitigate the risks of replacing high-visibility systems, while simultaneously growing a high-performance culture. It provides a faster return on investment, substantial cost savings, and happier employees and users. This is possible even in a complex, highly regulated environment such as the government. Second, instead of trying to replace existing systems and processes in a “big bang,” the GDS replaced them incrementally, choosing to start where they could most quickly deliver value. They took the “strangler application” pattern and used it to effect both architectural and organisational change. Third, the GDS pursued principle-based governance. The leadership team at GDS does not tell every person what to do but provides a set of guiding principles for people to make decisions aligned to the objectives of the organisation. The GDS governance principles state:

  1. Don’t slow down delivery.
  2. Decide, when needed, at the right level.
  3. Do it with the right people.
  4. Go see for yourself.
  5. Only do it if it adds value.
  6. Trust and verify.

People are trusted to make the best decisions in their context, but are accountable for those decisions — in terms of both the achieved outcomes and knowing when it is appropriate to involve others.




AWS Associate Solutions Architect Study Notes

Please see my study notes for the AWS Associate Solutions Architect exam. To help me prepare for and pass the exam, I used the following services:

My study notes have been exported from Evernote into a PDF (70 pages, approx. 20MB). You can use the PDF viewer below, or download the PDF directly here.

AWSAssociateSolutionsArchitectStudyNotes-09-07-2019-v1



Tony Robbins – Money Master The Game Summary

Tony’s passion for taking the complex, often opaque world of finance, investment and wealth generation and making it accessible to the ordinary person shines through in Money Master The Game. It is a fantastic book filled with knowledge, wisdom and insight into the financial industry: what it takes to generate wealth, earning from assets, the strengths and pitfalls of key financial instruments, and the hidden fees and taxes which eat away at your earnings. It debunks common financial myths and provides key insights and investment strategies from some of the most successful investors around the world – including Ray Dalio, John Bogle, Mary Erdoes, Carl Icahn, Steve Forbes, Marc Benioff, David Swensen, Paul Tudor Jones, Marc Faber and John Templeton. Tony ties in his coaching and motivational background, including detailed insights into human emotional needs, driving motivations, market psychology and their impacts on investments.

Principles

  • No one can predict the market successfully & consistently over time. Markets go up and down over time and most opportunities exist in bear markets when everyone else is exiting/selling and confidence is at its lowest
  • Develop a deep understanding of the concept and benefit of compounding savings / investments, save & invest for the long run to generate wealth
  • Avoid mutual funds and excessive fees; look to low-cost broad index funds (like the S&P 500) which will likely outperform mutual funds at a much lower cost
  • Create a diversified portfolio with a Risk/Growth bucket & Security bucket with approximately 60/40 asset allocation, invest using the dollar-cost averaging strategy and regularly rebalance your portfolio; have your portfolio diversified across markets, assets and time.
  • Develop financial goals; create a financial plan leading to retirement / financial independence and execute.

Key insights from the book

Understand that financial markets / investment is a zero-sum game – in order to gain, someone must lose. If you don’t know what you’re doing, someone will eventually take your money. Nobody can predict what the markets will do – sometimes people get lucky, other times they get overconfident and end up losing big.

Look for investments which provide asymmetric risk / reward – A big return for little exposure.

Give back, show gratitude, choose the abundance mindset over the scarcity mindset. Give so you shall receive; help others and the benefits will come.

It’s not what you earn, it’s what you keep. The only certainty in life is death & taxes!

Taxes, if not legally minimised can have a significant impact on your compounding investment, see example in figure 1 with one dollar invested, doubling each year for 20 years (i.e. compounding). Wealth generated with no taxes or fees after 20 years is a staggering $1,048,576. Running the same numbers with a 33% tax each year after 20 years, wealth generated is a miserly $28,466. A good illustration on why we should try to minimise taxes and fees on our wealth, in this example a $1,020,110 difference in wealth after 20 years.

Starting with one dollar, doubling each year without tax and with tax

Figure 1. Starting with one dollar, doubling each year without tax and with tax
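The arithmetic behind figure 1 can be checked with a few lines of Python. A minimal sketch (the function name and the modelling assumption – a 100% gain each year, with tax deducted from each year’s gain – are mine, not from the book):

```python
def compound_double(years, tax_rate=0.0, start=1.0):
    """Double the balance each year (a 100% gain), taxing each year's gain."""
    balance = start
    for _ in range(years):
        gain = balance  # doubling means the gain equals the current balance
        balance += gain * (1 - tax_rate)
    return balance

print(f"${compound_double(20):,.0f}")        # $1,048,576 with no tax
print(f"${compound_double(20, 0.33):,.0f}")  # ~$28,466 with 33% tax on each year's gain
```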

You’ll never become wealthy by simply working for a pay check, or by working harder, smarter or longer – a mistake millions of people make. Every working person is already a financial trader – trading their time for money – about the worst trade you can make, as you can never get time back.

Create a money machine – get money working for you (rather than you working for money) which generates wealth as you work, sleep and have fun. Create a Freedom Fund – regularly save money to invest and generate wealth. Put a set percentage of each pay check into your Freedom Fund before you spend any of it (i.e. pay yourself first) – put in 10%, 15%, 20% etc. (whatever number makes sense for your situation) of each pay check, don’t miss a payment and don’t raid this account for anything other than investment. Even better if you can automate the Freedom Fund transfer with each pay.

Sir John Templeton – “you find the bargains at the point of maximum pessimism – there’s nothing — nothing — that will make the price of a share go down except the pressure of selling”.

Financial markets are like Earth’s seasons – after a financial winter (negative outlooks, losses, decreasing valuations, bear markets) comes the financial spring (positive outlooks, profits, increasing valuations, bull markets), and markets go in cycles – however no one can successfully and consistently predict these cycles. As Jack Bogle said, “Don’t do something, just stand there!” – like waves in an ocean, become the market and move with it, rather than trying to beat it – see Ray Dalio’s All Weather Investment Portfolio below on how to achieve this.

Develop a deep understanding of compounding. Compounding investment is one of the great wonders of the financial world, see figure 2 – would you rather receive one million dollars up front or start with one cent and double each day for a month?
Most people choose the $1m up front; however, choosing the second option of a cent doubling for 30 days leaves you with a massive $10,737,418.24.

Value of compounding investment, starting with one cent, doubling each day after 30 days
Figure 2. Shows the value of compounding investment, starting with one cent, doubling each day – after 30 days this leaves you with $10,737,418.24.
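Figure 2’s number is easy to reproduce yourself. A one-line sketch (the function name is mine):

```python
def penny_doubling(doublings=30, start=0.01):
    """One cent doubled each day for a month: start * 2**doublings."""
    return start * 2 ** doublings

print(f"${penny_doubling(30):,.2f}")  # $10,737,418.24
```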

Be very careful of financial brokers and mutual funds – both likely have large fees which will put a significant drain on building your financial wealth. The most basic deal is you put up all the capital (i.e. for investment in a mutual fund), take on all the risk and incur 100% of any losses, yet the financial brokers & mutual funds still take fees regardless of the investment outcome. It’s a dud deal, hyped up with a lot of marketing and public relations. Often they can take 2-3% of your investment returns, which over a lifetime adds up to many hundreds of thousands of dollars. As seen in figure 3, over a 60-year lifespan an initial investment of $10K (returning 7%) with no fees versus with a 2.5% fee is the difference between retiring with $579K or $140K (a whopping $439,190 of fees).

Impact of 2.5% fee on an initial investment of $10K with a 7% return over a 60 year lifespan

Figure 3. Shows the impact of 2.5% fee on an initial investment of $10K with a 7% return over a 60 year lifespan
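Figure 3’s gap can be reproduced with a short sketch (mine, not from the book), assuming the fee simply reduces each year’s return:

```python
def future_value(principal, annual_return, years, annual_fee=0.0):
    """Compound growth, with the fee deducted from each year's return."""
    return principal * (1 + annual_return - annual_fee) ** years

no_fee   = future_value(10_000, 0.07, 60)         # ~$579K
with_fee = future_value(10_000, 0.07, 60, 0.025)  # ~$140K
print(f"Fees cost ${no_fee - with_fee:,.0f} over 60 years")
```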

There are two phases of your wealth generation and investment lifespan:

  • The accumulation phase where you build up enough assets and re-invest all earnings (i.e. climb the mountain)
  • The decumulation phase where you withdraw income from asset earnings (i.e. over the mountain pinnacle and down) – retirement or financial independence

In the past 100 years, the stock market was up approx. 70% of the time, and down 30% of the time, hence the need to build an investment strategy / asset allocation with more than just stocks (high risk / reward). Stocks have been the best place for growth over time, however stocks are volatile.

Financial Myths

The 9 financial myths to be aware of:

Myth 1: The $13T lie: “Invest with us. We’ll beat the market!”

  • Investment brokers and mutual funds fail to beat the market and simple index funds – “An incredible 96% of actively managed mutual funds fail to beat the market over any sustained period of time!”. Not only will they charge excessive fees (2-3%) reducing your return on investment, but they will likely not outperform the market.
  • An index fund is a simple list of stocks – i.e. the S&P 500 – which closely tracks (mimics) the performance of the market it represents. When investing in an index fund, you own part of the fund, not the underlying companies which the index fund has invested in – a very simple, low-fee way to diversify.
  • Warren Buffett won a $1 million wager (for charity) in which he bet professional stock picker Protege Partners couldn’t outperform the Vanguard S&P 500 index fund over a 10-year period. The bet ended early with the Vanguard S&P 500 index fund returning 7.1% per year, whilst Protege Partners could only return 2.2% per year.

Myth 2: “Our Fees? They’re a small price to pay!”

  • The average cost of owning a mutual fund is 3.71% per year – i.e. if your investments returned 8% per year, after the mutual fund’s fees your net return would only be 4.29%!
  • As a counter to mutual funds, you could own an index fund for as little as 0.14% per year – i.e. if your investments returned 8% per year, after the index fund’s fee your net return would be 7.86%!

Myth 3: “Our Returns? What you see is what you get”

  • Be careful when mutual funds & investment brokers quote rates of return – depending on the timing of investments, ongoing contributions & withdrawals, it is easy to mislead with statistics. Look for dollar-weighted returns, which reflect what you actually get, rather than time-weighted returns, which are used for fund marketing and promotion.

Myth 4: “I’m your Broker, and I’m here to help”

  • Brokers often don’t have your best interests in mind (often working for their company’s needs or selected financial products), are potentially conflicted, and are only required to offer a suitable product. Look for a financial fiduciary who is legally bound to provide the best advice and disclose any conflicts of interest
  • A 2009 Morningstar study found 49% of brokers don’t own any portion of the funds they manage

Myth 5: “Your Retirement is just a 401(k) away”

  • “You can’t save just three percent of your income for thirty years and expect to live another thirty years in retirement with the same income you had when you were working” – John Shoven, professor of economics at Stanford.
  • A lot of 401(k) plans are actively managed mutual funds – which are high in fees, significantly reducing returns
  • 401(k) plans have some tax benefits, however they are often loaded up with as many as 17 additional fees
  • Look for low fee 401(k) plans and research the Roth 401(k) to reduce future tax shocks

Myth 6: Target-date funds: “Just set it and forget it”

  • Target-date funds allow you to pick a date when you’ll retire and have the fund manager invest and reduce risk over time (i.e. reduce stocks, increase bonds) as you get closer to retirement
  • There are risks and pitfalls with target-date funds – they do not guarantee your assets / wealth (a common misunderstanding), they can drop in value right when you’re ready to retire, and be aware of the fees

Myth 7: “I hate annuities and you should too”

  • Income-guaranteed annuities are offered by financial companies who take an initial lump sum deposit, invest it and, at a later time when it matures, pay you a stream of income at regular intervals. Some annuity products can be set up for payments for the lifetime of the investor (or a dependent), or for a fixed period (i.e. 20 years of payments), and can offer a good, principal-protected, guaranteed retirement plan.
  • Stay away from variable annuity products – which are tied to market performance (i.e. can reduce your payments if the market tanks) – prefer fixed annuity products.
  • Be aware of annuity fees, especially from annuities which invest in mutual funds (fees on top of fees) – these can be upwards of 4.7% in fees.

Myth 8: “You gotta take huge risks to get big rewards!”

  • Beware of speculation – any investment which doesn’t promise safety of principal and an adequate return is speculative.
  • Look for opportunities “which provide asymmetric risk/reward” i.e. great returns for little risk.
  • Some financial products which offer good returns for low risks are:
    • Structured Notes – a loan to a bank
    • Market Linked Certificates of Deposit (CDs) – Similar to structured notes, but are linked to market performance (so if the market goes up, you get a piece of the action) and guaranteed by the Federal Deposit Insurance Corporation (FDIC)
    • Fixed Indexed Annuities – 100% principal protection; create a fixed income stream with an initial lump sum deposit maturing after a select period

Myth 9: “The lies we tell ourselves”

  • “The ultimate thing that stops most of us from making significant progress in our lives is not somebody else’s limitations, but rather our own limiting perceptions or beliefs”
  • “If you want to change your life you have to change your strategy, you have to change your story, and you have to change your state”

Financial Independence

The 5 different levels of financial dreams:

  1. Financial Security – how much do you need to cover your basic needs such as mortgage / rent, utilities, food, transportation and insurance per year?
  2. Financial Vitality – a goal marker representing 50% (half) of the little extras / luxuries such as clothes, entertainment, dining out, music, gym membership – life goodies!
  3. Financial Independence – The ultimate goal – having the lifestyle you do today, funded entirely by the income generated from your assets. You don’t need to work for money at this point!
  4. Financial Freedom – Having more than you have today, with two to three additional luxuries such as a holiday house, long vacations, a bigger home, donations etc. without having to work to pay for them
  5. Absolute Financial Freedom – Having and doing anything you want, without having to work for it.

Everyone will have different ambitions; some will be happy with Financial Security, whilst others may aim for Absolute Financial Freedom. I believe there are only two very clear and realistic financial goals from the above list – Financial Security & Financial Independence – that will lead to a life of sustainable abundance and happiness. To achieve the above financial dreams, there are three key steps:

  1. Unleash your hunger and desire, and awaken laser-like focus – “you become inspired by something that excites you so much that your desire is completely unleashed”, obsessed!
  2. You take massive and effective action – “and adapting your approach whenever it doesn’t work and trying something new, you will move toward your dream”
  3. Grace – “Gratitude connects you to grace, and when you’re grateful, there is no anger. When you are grateful, there is no fear”. Good things will happen. The universe will help you achieve your goals.

Create a plan that works for you, and stick to it. Don’t follow another person’s plan or goals – you’ll lose; follow your own. “The race of life is a marathon, not a sprint” and “it doesn’t matter where we start. It’s how we finish that counts”. From Jim Rohn – “What you get will never make you happy; who you become will make you very happy or very sad”. “The only person you should try to be better than is the person you were yesterday”.

Accelerating Wealth Generation – Speed It Up

1. Save more and invest the difference

  • Every time you get a pay rise, save the difference from your old wage (i.e. don’t just spend it)
  • Pay your home loan off faster to reduce the total interest paid (by paying next month’s principal with each repayment).
  • Look to drive older cars and avoid car loans; save up the repayment amounts and buy your next car outright (see Dave Ramsey’s video).
  • Weigh up the cost / benefit of those little, constant expenditures each day (i.e. take-away coffees, lunches out etc.) – are they really worth it over the long run?
  • Create a budget and brainstorm how you can reduce your expenses

2. Earn more and invest the difference

  • Find a way to do more for others, be creative, invest in yourself and work out how to become more valuable
  • Find a side hustle and make money, turn a hobby / passion into profit, find a problem to solve and solve it

3. Reduce fees and taxes (and invest the difference)

  • It’s not how much you make, it’s how much you get to keep (after taxes). Look to legally minimise your tax burden; talk to a professional tax accountant

4. Get better returns and speed your way to victory

  • Look for asymmetric risk/reward investments
  • Learn the power of asset allocation and diversify your portfolio to reduce risk & increase returns
  • At a 4% return on investment your money doubles every 18 years; at a 10% return on investment your money doubles every 7.2 years – see figure 4. The Rule of 72 is an easy way to quickly work out how long it will take for your (compounding) investment to double.
Years it takes for your wealth to double based on different return on investment rates

Figure 4. Years it takes for your wealth to double based on different return on investment rates
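The Rule of 72 is simple enough to express directly – divide 72 by the percentage return. A minimal sketch (function name mine):

```python
def years_to_double(annual_return_pct):
    """Rule of 72: approximate years for a compounding investment to double."""
    return 72 / annual_return_pct

print(years_to_double(4))   # 18.0 years
print(years_to_double(10))  # 7.2 years
```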

5. Change your life – and lifestyle – for the better

  • Can you move to a location with cheaper house prices and a lower cost of living whilst maintaining a decent income?
  • Can you move to a state with lower taxes?

Investment Strategy

Asset Allocation

“Asset allocation is the most important investment decision of your lifetime, more important than any single investment you’re going to make in stocks, bonds, real estate, or anything else”. “Anybody can become wealthy; asset allocation is how you stay wealthy”. Asset allocation is dividing up your money and investing it in different types of investments (such as stocks, bonds, commodities, cash, real estate etc), including deciding the portion you invest in each of these areas. Each of these asset types has different risk levels, returns, liquidity and investment horizons, and diversification through asset allocation doesn’t cost you anything, yet can significantly reduce your investment risk. Ideally you should diversify across securities, across asset types, across markets (domestic, international) and across time. One of the best ways to diversify equities (stocks) is through index funds or exchange-traded funds (ETFs), as you then own a piece of the whole market – hundreds of companies combined.

According to David Swensen, there are only three tools for reducing risk and increasing returns:

  1. Security selection – stock picking
  2. Market timing – short-term bets on the direction of the market
  3. Asset allocation – long-term strategy for diversified investing

Broadly speaking, it’s best to have two asset allocation buckets and a spend bucket:

  1. Security Bucket – where you keep low-risk investments to protect your money and prevent loss. This is the money you don’t want to lose, grown slowly for peace of mind. Very low-risk assets belong in this bucket, such as cash, cash equivalents, bonds, certificates of deposit, your home, your pension, annuities, your life insurance policy and structured notes (secure types)
  2. Risk / Growth Bucket – where you play to win and aim for high returns, however the investment risks and potential losses are much higher. Any of your investments which carry higher risk belong in this bucket, such as equities (stocks), high-yield bonds (junk bonds), real estate (including REITs), commodities, currencies, collectables and structured notes (riskier types)
  3. Dream Bucket – Put some of your income, investment wins, bonus / savings into this bucket to spend now, to reward yourself and enjoy life in the present

For this post we’ll assume a portfolio allocation of 60% in the Risk / Growth bucket, and 40% in the Security bucket, with your dream bucket being filled from the spoils of your investments and other income streams.

Financial buckets - security, risk/growth, dream

Figure 5. Different financial buckets to diversify your wealth

Market Timing

Many say timing the market is critical, however nobody can consistently and successfully predict the market; it fluctuates up and down like waves in the ocean. We often invest money into the market when everybody else is – when the market is high and at exactly the wrong time. The best opportunity to invest in markets is at the time of maximum pessimism – when everyone is selling. “Be fearful when others are greedy, and greedy when others are fearful” – Warren Buffett. Rather than trying to time the market, you can use a technique called dollar-cost averaging, which helps you diversify investment across time. To dollar-cost average, simply make equal investments on a set schedule (i.e. weekly, monthly, quarterly) into each of your buckets, which will protect you against market fluctuations (high, low, flat). The volatility in the market helps this strategy, and it’s best suited to regular contributions over a longer time period. It may not be the best approach if you have a lump sum to invest.
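To illustrate dollar-cost averaging, here’s a rough Python sketch (the $500 contribution and the unit prices are hypothetical numbers of mine, not from the book). Because a fixed dollar amount buys more units when prices are low, the average cost per unit comes out below the average price:

```python
def dollar_cost_average(amount_per_period, prices):
    """Buy a fixed dollar amount at each period's price; return (units, avg cost)."""
    units = sum(amount_per_period / price for price in prices)
    average_cost = amount_per_period * len(prices) / units
    return units, average_cost

# Four hypothetical monthly buys of $500 at fluctuating unit prices
units, cost = dollar_cost_average(500, [100, 80, 125, 100])
print(f"{units:.2f} units at ${cost:.2f} average cost (average price $101.25)")
```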

Portfolio Rebalancing

The pattern you want to avoid is fixing your portfolio / buckets and never changing them over time. In order to maximise returns, protect your capital and keep some of your growth, you should regularly review and rebalance your portfolio to ensure it sticks to the Security and Risk/Growth ratio you’ve chosen. Over time your Risk/Growth bucket may take off, grow in value disproportionately and throw out your balance – see the example in figure 6, assuming a 60/40 split between the Risk/Growth and Security buckets. If your initial portfolio worth was $1,000, with $600 (60%) in the Risk/Growth bucket and $400 (40%) in the Security bucket, and all of a sudden the sharemarket grows, your Risk/Growth bucket could now be worth $810 (67%) with your Security bucket still at $400 (33%), throwing out your portfolio ratio.

In order to rebalance your portfolio, you can divert your regular contributions into your Security bucket (i.e. $140 invested in the Security bucket to bring it back up to 40%), redirect the profits from the Risk/Growth bucket, or even sell some of the Risk/Growth bucket assets to put back into the Security bucket. You can rebalance your portfolio quarterly, half-yearly or yearly, but be aware of any tax implications (such as owning equities for less than a year, and potentially benefiting from tax-loss harvesting). The concept of portfolio rebalancing is to reduce risk, protect assets and ensure that when the market rises (in your favour), you secure some of the gains in your Security bucket (sell high, buy low). More details on portfolio rebalancing can be found on Investopedia.


Portfolio rebalancing after Risk/Growth assets increase in value

Figure 6. Portfolio rebalancing after Risk/Growth assets increase in value
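The figure 6 example can be worked out directly. A minimal sketch (function name mine) of the top-up needed to restore the target ratio when only the Security bucket receives new money:

```python
def rebalance_contribution(risk_value, security_value, target_security=0.40):
    """Extra money to add to the Security bucket so it is again target_security
    of the total, leaving the Risk/Growth bucket untouched."""
    required_security = risk_value * target_security / (1 - target_security)
    return max(0.0, required_security - security_value)

# Risk/Growth has grown to $810 while Security sits at $400 (the example above)
print(rebalance_contribution(810, 400))  # 140.0 – invest $140 into Security
```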

All the investment experts interviewed in the book explained the importance of never losing money, as it is much harder to gain back losses just to break even.

Gains required to break even if you lose on investments.

Figure 7. Gains required to break even if you lose on investments.
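Figure 7’s point – that losses require disproportionate gains to recover – follows from simple arithmetic: after losing L%, you need to gain L / (100 − L) × 100 percent to get back to even. A sketch:

```python
def gain_to_break_even(loss_pct):
    """Percentage gain needed to recover a given percentage loss."""
    return loss_pct / (100 - loss_pct) * 100

print(gain_to_break_even(20))  # 25.0 – a 20% loss needs a 25% gain
print(gain_to_break_even(50))  # 100.0 – a 50% loss needs a 100% gain
```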

Ray Dalio All Weather Portfolio

A must-watch video by Ray Dalio is How The Economic Machine Works, which explains the economics behind the All Weather Portfolio. The All Weather Portfolio has been back-tested over the last 100 years of financial history and has shown great returns at minimised risk. Ray suggests it’s very hard to time the market, so the average investor should not try to time the market (it’s just like playing poker) – professional investors with millions of dollars of resources, working around the clock, are also trying to time the market, so don’t try to out-compete them.

Figure 8. How The Economic Machine Works by Ray Dalio

Stocks / shares are three times more risky (i.e. volatile) than bonds.

Portfolio balance should consider two dimensions:

  • Investment amount & growth potential
  • Risk

Understand that the market and individual investments have good times and bad times, and generally there are only four things which move the price of assets:

  1. Inflation
  2. Deflation
  3. Rising economic growth
  4. Declining economic growth

These make up the different financial seasons, and ideally you should have 25% of your risk spread across each of them.

The four seasons and which investment type will do well

Figure 9. The four seasons and which investment type will do well


Ray Dalio’s All Weather Portfolio (also known as All Seasons) Recommendation

  • 40% long-term bonds
  • 30% stocks
  • 15% intermediate-term bonds
  • 7.5% gold
  • 7.5% commodities

Rebalance portfolio at least annually.

Ray Dalio All Weather Portfolio split

Figure 10. Ray Dalio All Weather Portfolio split

Tony Robbins back-tested the All Weather Portfolio from 1927 and found that since then it had lost money only 14 times, with an average loss of 3.65%, compared with the S&P 500 losing 24 times with an average loss of 13.66%. When the All Weather Portfolio was tested over the 30 years from 1984 to 2013, it returned an average annualised return of 9.72%, making money 86% of the time, and the average loss was only 1.9% with a standard deviation of only 7.63% (meaning low risk and low volatility).

All Weather Portfolio links

Creating Your Lifetime Income Plan

Once you’ve built up enough wealth and are looking towards retirement, you need to consider how to generate income from your assets – “you can’t spend assets, only cash”. The goal of financial independence is being able to generate sustainable income from your assets, without significantly diminishing your wealth and with low risk / volatility.

The 4% rule on retirement withdrawals from your portfolio is dead. Developed in the early 90’s, it did not perform well in the 2000’s, during which you would have lost 33% of your wealth and had only a 29% chance that your money would last your lifetime.

  • The 4% rule suggests how much you should withdraw from your portfolio once retired (including adjusting for inflation)
  • The 25-times rule estimates how much you’ll need to retire
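The two rules are inverses of each other (25 = 1 / 0.04). A quick sketch (the $60K annual expenses figure is a hypothetical example of mine):

```python
def retirement_target(annual_expenses):
    """25-times rule: the portfolio needed to retire on a given annual spend."""
    return annual_expenses * 25

def first_year_withdrawal(portfolio, rate=0.04):
    """4% rule: the suggested first-year withdrawal from a retirement portfolio."""
    return portfolio * rate

target = retirement_target(60_000)    # $1.5M needed to retire on $60K a year
print(first_year_withdrawal(target))  # back to $60K – the rules mirror each other
```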

See some links below explaining the rules, their potential flaws and a section on the inflation-adjusted retirement figure:

You must be careful when getting close to retirement about the impact current market conditions could have on it – you could be a “happy camper”, or a “homeless camper”. The risk facing retirees and their wealth is called sequence of returns – in essence, market conditions & ROI during the earliest years of your retirement will define your later years. You could effectively do everything right during your wealth generation phase, and right as you retire things could go pear-shaped and impact your retirement and wealth longevity. More details on sequence of returns can be found here.

Annuities To Provide Secure Income

Annuities are a financial product that pays out a fixed stream of payments to an individual and can be used as an income stream for retirees. Annuities are generally sold by financial institutions which can accept a lump sum investment, or a slower, gradual investment into the annuity during the accumulation phase. Upon annuitisation (annuitisation phase), payouts begin to the individual. The financial institutions will take your money and invest it (at their risk), whilst guaranteeing you a return on investment / income stream for the term of the annuity (and sometimes for life). There are two main types of annuities: immediate annuities and deferred annuities. Deferred annuities come in a few forms:

  1. Fixed annuity – you get a fixed, guaranteed rate of return every year
  2. Indexed annuity – your rate of return is tied to stock market performance (you often get a % of the upside, with no downside)
  3. Hybrid indexed annuity – similar to indexed annuity, except with a lifetime income (literally until death)
  4. Variable annuity – not recommended, very expensive and often include mutual fund fees and insurance company fees
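For a plain fixed annuity paying out over a fixed term, the level payment follows the standard annuity formula. This sketch and its example numbers are mine, not from the book – real products layer fees, riders and conditions on top:

```python
def annuity_payment(principal, annual_rate, years, payments_per_year=12):
    """Level payment that pays out the principal, with interest, over the term."""
    r = annual_rate / payments_per_year  # per-period rate
    n = years * payments_per_year        # number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# A hypothetical $200K lump sum paid out monthly over 20 years at a 3% rate
print(f"${annuity_payment(200_000, 0.03, 20):,.2f} per month")
```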

Annuities:

  • Protect your wealth
  • Provide higher earnings than CDs or bonds
  • Provide a 100% guarantee of your principal
  • Can provide income insurance / guaranteed income for life (if annuity product allows)

“Annuities in many ways are the antidote to the problem of sequence of returns”. More detail on annuities, including types, conditions, payments, principal, income riders, insurance, death, inheritance etc. can be found here:

Meet The Masters

All experts interviewed shared at least four common obsessions:

  • Don’t Lose Money; defence is more important than offence
  • Risk a Little to Make a Lot; seek opportunities with asymmetric risk/reward
  • Anticipate and Diversify
  • You’re Never Done

David Swensen Individual Portfolio Recommendation

  • 20% in Domestic stock index funds (Risk/Growth bucket)
  • 20% in International stock index funds (Risk/Growth bucket)
  • 10% in Emerging stock market index funds (Risk/Growth bucket)
  • 20% in Real Estate Investment Trust index funds – REITs (Risk/Growth bucket)
  • 15% in Long-term US Treasuries (Security bucket)
  • 15% in Treasury Inflation Protected Securities – TIPS (Security bucket)

The above portfolio has two Security bucket investments: the long-term US Treasuries protect against deflation scenarios and the Treasury Inflation Protected Securities (TIPS) protect against inflation scenarios. The portfolio is heavily weighted towards the Risk/Growth bucket (70%) compared to the Security bucket (30%) and is aimed at the long-term horizon (given the performance of equities over the long run far outweighs other investment types).

Jack Bogle’s Portfolio Principles & Portfolio

  1. Asset allocation in accordance with your risk tolerance and your objectives
  2. Diversify through low-cost index funds
  3. Have as much in bond funds as your age

Jack’s Portfolio

  • 60% in stocks (mostly Vanguard stock index funds)
  • 40% in bonds (mostly Vanguard Total Bond Market Index and tax-exempt municipal bond funds)

Warren Buffett

Looks for undervalued companies and buys them with the expectation they will rise in price over time

Value investing – invest in things that are useful and serve some purpose and that supply some practical need that people have

Active fund investment management is a losing bet

Prefer assets which create wealth and generate a return

Invest in index funds that give exposure to the broad market and hold them for the long term

Paul Tudor Jones

Defence is ten times more important than offence

You always want to be with whatever the predominant trend is; don’t be a contrarian investor.

The metric Paul looks at for everything is the 200-day moving average of closing prices. One principle is to get out of anything that falls below its 200-day moving average.
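As a minimal sketch of that exit rule (an illustration only, not Paul’s actual system), a simple moving average and exit check might look like:

```python
def moving_average(prices, window=200):
    """Simple moving average of the last `window` closing prices."""
    if len(prices) < window:
        return None  # not enough price history yet
    return sum(prices[-window:]) / window

def should_exit(prices, window=200):
    """Exit rule: True when the latest close falls below the moving average."""
    ma = moving_average(prices, window)
    return ma is not None and prices[-1] < ma

# Toy example with a 5-day window so the behaviour is easy to see.
closes = [100, 102, 104, 106, 108]
print(should_exit(closes, window=5))  # False - close (108) above the 5-day MA (104)
closes.append(90)                     # sharp drop
print(should_exit(closes, window=5))  # True - 90 is below the 5-day MA (102)
```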

Work on a five-to-one asymmetric risk/reward ratio for investments, i.e. risking one dollar to make five dollars. With this strategy Paul can be wrong 80% of the time and still not lose.
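The arithmetic behind that claim can be checked directly: at a 5:1 payoff, the expected profit per dollar risked stays non-negative even at very low win rates (a sketch of the maths, not trading advice):

```python
def expectancy(win_rate, reward=5.0, risk=1.0):
    """Expected profit per dollar risked for a given win rate."""
    return win_rate * reward - (1 - win_rate) * risk

# Wrong 80% of the time (20% win rate): still slightly ahead.
print(round(expectancy(0.20), 2))  # 0.2 -> +$0.20 per $1 risked

# Break-even win rate for a 5:1 ratio is risk / (risk + reward).
print(round(1 / 6, 3))  # 0.167 -> can be wrong ~83% of the time and break even
```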

Marc Faber

“The most money made is by doing nothing, sitting tight. If you don’t see really good opportunities, why take big risks? Some great opportunities will occur every three, four or five years, and then you want to have money”.

You have to be very careful about buying things at a high price, because they can drop. You have to keep your cool and have money when your neighbours and everybody else are depressed.

You don’t want to have money when everyone else has money, because then everyone competes for assets, and they are expensive.

Marc Faber’s previous asset allocation

  • 25% in stocks
  • 25% in gold
  • 25% in cash & bonds
  • 25% in real estate

Marc Faber’s current (rough) asset allocation

  • 20% in stocks
  • 25% in gold
  • 35% in cash & bonds
  • 30% in real estate

(Note: as stated, these figures sum to 110% – more than 100%)

Compounding Investment Calculator

You can use the below compound investment calculator to work out how your wealth will grow over time, assuming you start with an initial deposit (i.e. $10K) and invest $12K per year over 15 years with a compounding return on investment of 7.5%. You can download my Microsoft Excel spreadsheet and play around with the numbers to see the impact on your wealth over time; there are also many online return-on-investment calculators such as the Money Smart Compound Interest Calculator.

Figure 11. Compounding investment calculator, showing an initial $10K investment, ongoing 12K investment per year compounding for 15 years with a 7.5% ROI

You can download my MoneyMasterTheGame-09-07-2019-v1.xlsx Microsoft Excel spreadsheet with all figures, tables, calculations and compound investment calculator here.
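The spreadsheet’s core calculation can be reproduced in a few lines (assuming, as a simplification, one contribution added at the end of each year – the spreadsheet may compound differently):

```python
def future_value(initial, yearly_contribution, annual_return, years):
    """Grow a starting balance with a fixed yearly contribution and return."""
    balance = initial
    for _ in range(years):
        # Apply one year's growth, then add the year-end contribution.
        balance = balance * (1 + annual_return) + yearly_contribution
    return balance

# $10K initial, $12K per year, 7.5% return, 15 years (figures from the post).
total = future_value(10_000, 12_000, 0.075, 15)
print(f"${total:,.0f}")  # roughly $343,000 under these assumptions
```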

You can view more great content from Tony Robbins on his YouTube channel.




Top 8 Books for Technical Leaders

We’re always searching for ways to develop and maintain high-performing teams, reduce risk, improve quality and increase speed to market. The below books have helped in the journey to become a better information technology / software professional and technical leader. The books discuss productivity, leading and managing knowledge workers in creative work, growing technical leadership skills, agile leadership, lean thinking, change management and financial management.

Peopleware: Productive Projects and Teams – Tom DeMarco & Tim Lister

Peopleware - Productive Projects and Teams 

Peopleware is a great book for leaders of knowledge workers and creative environments. Through their experience and research, DeMarco and Lister provide examples of how to enable productive teams and describe common productivity killers. Peopleware discusses:

  • How to keep staff happy, retention high, burnout low.
  • How to setup your team environment for maximum productivity.
  • How to create a learning culture.
  • Setting up the office and work environment to maximize flow time and teamwork.
  • Leadership, management, goal alignment and networking.
  • Factors that will lead to teamicide – i.e. breaking teams.
  • Creating a culture of transparency and trust.
  • Empowering people to define methods most appropriate for their work, rather than strict adherence to prescribed methodologies.
  • Tips for effectively and pragmatically managing risk and change management.
  • How to run effective meetings, avoid ceremonies and ensure working meetings have outcomes and are efficient and effective.
  • How to grow community and culture, make work fun, enable innovation, keep motivation high and teams happy.

You can find a detailed Peopleware: Productive Projects and Teams summary here.

You can review and purchase Peopleware: Productive Projects and Teams on Amazon.com.au.

 


The Manager’s Path: A Guide for Tech Leaders Navigating Growth & Change – Camille Fournier

The Managers Path

<<< Summary In Progress >>>

You can review and purchase The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change on Amazon.com.au.


Leading Geeks – How to Manage and Lead the People Who Deliver Technology – Paul Glen

Leading Geeks

<<< Summary In Progress >>>

 

You can review and purchase Leading Geeks – How to Manage and Lead the People Who Deliver Technology on Amazon.com.au.


Management 3.0 Leading Agile Developers, Developing Agile Leaders – Jurgen Appelo

Management 3.0

Management 3.0 is a book designed for the agile manager / leader, presented as a synthesis of theory, science, nature and the practical real-world experience of Jurgen Appelo. Through understanding the differences between ordered systems (i.e. predictable) and complex systems (i.e. unpredictable), better structures, processes, methods and approaches can be used to manage, lead & motivate teams, reduce risk and increase the chance of success. Management 3.0 discusses:

  • The differences between ordered systems and complex systems, and selecting the right method for your environment and challenge.
  • Improving chances of success by embracing a constant flow of fail, learn & evolve cycles.
  • Existing agile methods such as RAD, Scrum, XP, Lean, CMMI, PMBOK, Prince2 and RUP, some of their limitations, and the CHAOS report on project failure.
  • The role of complexity – the state between order & chaos where innovation & creativity thrive.
  • The importance of social networks within an organisation and osmotic communication (overhearing conversations & information); the connectivity of an individual & team is one of the best predictors of performance.
  • How to manage a creative environment, including the core tenets of safety, play, work variation, creative visibility, and an environment which challenges the comfort zone.
  • How to create an environment of motivation for knowledge workers.
  • Enabling self-organising teams, which are best suited to complex/dynamic environments, and delegating so decisions are made at the right level (where the knowledge resides). Through aligning constraints, setting boundaries & protecting the environment, managers can define the direction and shared goals of autonomous teams.
  • How to develop competence within individuals & teams and capture key project performance metrics for feedback loops.
  • Communication and feedback ideas.
  • Organisational structure, team size & makeup, and generalising-specialist / T-shaped individuals.
  • Embracing continuous change in search of system optimisation through adaptation, exploration & anticipation.
  • Continuous improvement through the plan–do–check–act cycle and similar models. Often as the team tries different methods to improve productivity it will take one step back and two steps forward.

You can find a detailed Management 3.0 summary here.

You can review and purchase Management 3.0 Leading Agile Developers, Developing Agile Leaders on Amazon.com.au.


The Lean Startup: How Constant Innovation Creates Radically Successful Businesses – Eric Ries

The Lean Startup

<<< Summary In Progress >>>

You can review and purchase The Lean Startup: How Constant Innovation Creates Radically Successful Businesses on Amazon.com.au.


Leading Change – John P. Kotter

Leading Change

<<< Summary In Progress >>>

You can review and purchase Leading Change on Amazon.com.au.


The Mythical Man-Month – Fred Brooks

The Mythical Man Month

<<< Summary In Progress >>>

You can review and purchase The Mythical Man-Month on Amazon.com.au.


Beyond Budgeting – Jeremy Hope

Beyond Budgeting

<<< Summary In Progress >>>

You can review and purchase Beyond Budgeting on Amazon.com.au.


 

I hope you’ve found the above books useful. Have I missed any books you think software / information technology professionals and leaders should read?
I’d love to hear your thoughts and feedback below…

 

* As an Amazon Associate I earn from qualifying purchases…



Principles Of Software Development

A set of guiding principles for software development, applying rules of thumb over strict governance.

P1:    Build in the simplest way possible (KIS).

P2:    Prefer working in smaller increments, build for fast feedback, refactor as necessary. Apply the rule of 3.

P3:    Be a commercial developer (consider build cost, support cost & total cost of ownership) and provide regular updates on progress.

P4:    Be flexible in your approach depending on the problem at hand – prototype / spike / hack for early customer or technical feedback, and build solid, testable, maintainable, clean & quality code once the feature/concept is proven.

P5:    Apply the Testing Pyramid approach to quality assurance.

P6:    Pick the best tool / technology / approach for the job at hand. Consider optimising for the whole; globally rather than locally.

P7:    Apply 12-factor app design, letting architecture emerge. Consider the *ilities and make trade-offs visible, as you shouldn’t necessarily design for all of them – see architectural fitness functions.

P8:   Collective (collaborative) code ownership – the sum of all experiences leads to better software.

P9:   Follow Robert C. Martin’s ‘boy scout rule’: leave the code better than you found it.

P10:    Follow the Agile Documentation Manifesto. Prefer working software over documentation.

P11:  Replace manual processes with automation – automate all the things to reduce waste and improve throughput.

P12:   Be disciplined – taking shortcuts / taking on technical debt can be an option in the short term, but left unpaid it almost always leads to poor longer-term outcomes: a reduction in team productivity and cost-effectiveness, and increased risk.

P13:  Work at a sustainable pace, limit work-in-progress (WIP), stop starting and start finishing.

P14:  Design for failure (error driven design); consider all the things that could go wrong such as hardware failures, network failures, database failures, system slowness, upstream & downstream system failures, cancellations, time-outs, non-happy-day user flows etc.
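As one small illustration of P14, a call to a flaky downstream service might be wrapped with bounded retries (a sketch only; a real system would add specific exception types, exponential back-off, circuit breakers and monitoring):

```python
import time

def call_with_retries(operation, attempts=3, delay_seconds=0.1):
    """Run `operation`, retrying on failure up to `attempts` times.

    Design-for-failure sketch: assume the downstream call can fail
    (network error, timeout, 5xx) and handle it explicitly rather
    than only coding the happy path.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception as error:  # real code should catch specific errors
            last_error = error
            if attempt < attempts:
                time.sleep(delay_seconds)  # simple fixed back-off
    raise RuntimeError(f"operation failed after {attempts} attempts") from last_error

# Simulate a service that fails twice, then succeeds on the third call.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("downstream unavailable")
    return "ok"

print(call_with_retries(flaky_service))  # ok
```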

 




Agile Documentation Manifesto

Agile documentation should:

  1. Be simple (KIS) and lean (KIL).
  2. Be clear and unambiguous.
  3. Be lightweight – dot points, single sentences, to the point.
  4. Use consistent presentation with whitespace.
  5. Be well designed, organized and structured.
  6. Prefer diagrams over words.
  7. Be easy to search and navigate.
  8. Be easy to update.
  9. Be produced just in time (JIT).
  10. Be stable (don’t document rapidly changing work).

Ideally documentation should convey information clearly and concisely, reduce information silos (i.e. knowledge known only by an individual/team) and reduce re-learning time across teams & organisations.

 

 


Reference Material

Extract from Agile Modelling:

Agile documentation principles

  • The fundamental issue is communication, not documentation.
  • Agilists write documentation if that’s the best way to achieve the relevant goals, but there often prove to be better ways to achieve those goals than writing static documentation.
  • Document stable things, not speculative things.
  • Take an evolutionary approach to documentation development, seeking and then acting on feedback on a regular basis.
  • Prefer executable work products such as customer tests and developer tests over static work products such as plain old documentation (POD).
  • You should understand the total cost of ownership (TCO) for a document, and someone must explicitly choose to make that investment.
  • Documentation should be concise: overviews/roadmaps are generally preferred over detailed documentation.
  • Travel as light as you possibly can.
  • Documentation should be just barely good enough.
  • Comprehensive documentation does not ensure project success; in fact, it increases your chance of failure.
  • Models are not necessarily documents, and documents are not necessarily models.
  • Documentation is as much a part of the system as the source code.
  • The benefit of having documentation must be greater than the cost of creating and maintaining it.
  • Developers rarely trust the documentation, particularly detailed documentation because it’s usually out of sync with the code.
  • Ask whether you NEED the documentation, not whether you want it.
  • Create documentation only when you need it at the appropriate point in the life cycle.
  • Update documentation only when it hurts.

 

When is a document agile?

  • Agile documents maximize stakeholder ROI.  
  • Stakeholders know the TCO of the document.  
  • Agile documents are “lean and sufficient”.
  • Agile documents fulfill a purpose.  
  • Agile documents describe “good things to know”.
  • Agile documents have a specific customer and facilitate the work efforts of that customer.  
  • Agile documents are sufficiently accurate, consistent, and detailed.