

How Do We Know Where We Are Spending Our Money?

The closer you get to the board level (or any executive level) of an organization, the more interest there is in answering the question “Where are we spending our money?” Executives are not interested in looking back at a report after the fact. Rather, they are trying to understand whether the organization is using money wisely when viewed from the different perspectives that matter to them.

For organizations, the process is similar to those personal finance “spending reports” you get from credit card companies at the end of the year. You may be surprised at how much money you have spent on “Alcoholic Beverages”, but the real question you ask yourself is “should I be spending less (or more) on this category?”

While many agilists just want to say “Trust us; we are working on the most important thing every time”, this is not responsible management from the perspective of the executives, nor is it sufficient for good corporate governance. The reason is that there are many different perspectives on the use of money that need to be looked at, and there is no single way of determining priority that takes all of these perspectives into consideration.

What Types of Categories Do We Often See?

Organizations typically need to see their work from multiple perspectives or dimensions. They generate reports and charts using these perspectives, each representing 100% of a particular dimension. The data is most useful when it shows trends over time for a particular dimension, as opposed to a single point in time.

Here are some dimensions I’ve seen organizations track:

Product Life Cycle Horizon investment

Organizations will often be interested in how much they are investing in the different time horizons of a product / solution. Are we investing the right amount in evaluating new products, compared to retiring existing ones? Example tags (using standard SAFe terminology) on, for example, Epics might include:

  • Evaluating: investments aimed at potential new solutions, where we will get to a “stop, pivot or persevere” decision.
  • Emerging: evaluation has identified some promising new solutions that we want to continue to invest in.
  • Investing: investments requiring significant ongoing investment because of volatility in the environment.
  • Extracting: investments that are part of our stable offering.
  • Retiring: investment required to decommission a deployed solution.
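As a minimal sketch (the epic data and the “horizon” field name here are hypothetical, not from any particular tool), the 100% distribution for a dimension such as the life cycle horizon can be computed by simply counting tags:

```python
# Hypothetical epic data; in practice this would be exported from a tracking tool.
from collections import Counter

epics = [
    {"id": "E-101", "horizon": "Evaluating"},
    {"id": "E-102", "horizon": "Investing"},
    {"id": "E-103", "horizon": "Investing"},
    {"id": "E-104", "horizon": "Extracting"},
]

# Count epics per horizon tag, then express each as a percentage of the whole,
# so the dimension always represents 100% of the work.
counts = Counter(e["horizon"] for e in epics)
total = sum(counts.values())
distribution = {tag: round(100 * n / total, 1) for tag, n in counts.items()}
print(distribution)  # → {'Evaluating': 25.0, 'Investing': 50.0, 'Extracting': 25.0}
```

Running the same calculation per quarter would give the trend-over-time view described above.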

Centralized vs Decentralized Decision Making

Organizations often want to understand what kinds of decisions are being made at what levels. For agile transformations, there is a desire to decentralize decision making as far as possible. Example tags on Features to track this might include:

  • Portfolio: work initiated and prioritized at the portfolio level
  • Program: work initiated and prioritized at the program level

Strategic Theme

Organizations often want to understand how the capacity used maps back to the strategic themes of the organization. For example, you might see tags on Epics that reflect:

  • Automation
  • Strategic theme 2
  • And so on …

Leading Indicators

Organizations often need to understand whether they are heading in the right direction on an initiative well before the customer realizes the value. This often means identifying leading indicators: metrics where, if they head in the right direction, we believe the outcome to the customer will be realized. Tagging Epics to reflect these indicators could help with the calculation.

  • Percentage of deployed epics
  • Percentage of epics associated with key initiative
  • Leading indicator 3
  • And so on …
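For example, a leading indicator such as “percentage of deployed epics” could be derived from tagged epic data along these lines (the data and the `deployed` field are assumptions for illustration):

```python
# Hypothetical epic records with a deployment flag.
epics = [
    {"id": "E-1", "deployed": True},
    {"id": "E-2", "deployed": False},
    {"id": "E-3", "deployed": True},
    {"id": "E-4", "deployed": True},
]

# Share of epics that have actually been deployed.
deployed_pct = 100 * sum(e["deployed"] for e in epics) / len(epics)
print(f"{deployed_pct:.0f}% of epics deployed")  # → 75% of epics deployed
```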

Capitalization

Organizations typically operate out of two budgets: a capital budget and an operating budget. If work comes out of the capital budget, we can defer the recognition of those costs until we actually start selling the result of the effort. If it comes out of the operating budget, the costs drop straight to the bottom line, affecting profitability immediately. Tags we could apply to Features could simply be:

  • Capital budget
  • Operating budget

See How Do We Do Software Capitalization When we Go to Agile? for more on this subject.

Funding Source

Organizations often have different sources of funding for an IT organization, and it is up to the IT organization to ensure that the capacity allocated to work lines up with these funding sources. To track this, tags we could apply to Features could be:

  • Customer Projects
  • Service Projects
  • Base
  • And so on

Kano Model

The Kano Model of Customer Satisfaction classifies needs, capabilities, or product features based on customers’ perception and their effect on customer satisfaction. These classifications are useful for guiding design and investment decisions in that they indicate when good is good enough, and when more is better.

Kano analysis helps people understand how features stack up to support customer satisfaction. Customers (or their proxies) respond to questions related to their needs, and this data drives the analysis. Using Kano analysis you can determine whether, for example, a feature is a “must have” or a “differentiator”, which in turn indicates the level of investment you might apply: if it is a “must have” you might choose to apply minimal investment to get the feature, freeing up investment for something that truly excites the customer (the differentiator).

Tagging with the Kano model attributes helps us determine whether we are really going after something that excites the customer. Sample tags might include:

  • Must have
  • Differentiator
  • Linear
  • Indifferent
  • Reverse

For more information see Kano Model

And Then There are More Arbitrary Definitions

One product development shop I worked with really wanted us to track truly innovative work separately from maintenance work. Their categories were a combination of a number of notions:

  • Discretionary: General incremental product investments required to address market/customer demand. Also called “new features and/or enhancements”.
  • Innovation: Truly new work. Investment in a feature that is new to us or to our offering to the industry / market. Something that we may want to consider for a patent application or to mark as a trade secret.
  • Contractual: Investment fulfilling contractual commitments to a customer.
  • Platform: Investments supporting upgrades to new operating systems, database versions, web servers, browsers, compilers, and 3rd party components.
  • Maintenance: Planned investment required to address defects found in fielded releases of the product.
  • Technical Debt: Investment required to address issues in quality in the code we have in place today.
  • Release: Activity required to generate software deliverables that is not directly related to development of the deliverable content itself.
  • Overhead: Effectively the “other” bucket.

You can see that this list is a mix of a number of the ideas above. The problem people have with this type of categorization model is that it is often not clear which category an item belongs to, causing confusion. This approach is generally not recommended.

How Do We Track Investments?

There are two general approaches used by agilists everywhere:

  1. Tags: Tag epics, features, or stories with a tag that reflects a category. For example, perhaps we are interested in whether we are investing enough money in “Evaluating” versus “Emerging”. We’d create a unique tag for each of these values (“Evaluating”, “Emerging”, ..) in our tracking tool. We’d then set up a working agreement within the organization that “all epics need to be tagged with one of these values.” The working agreement will generally specify the lowest level to tag. In the example above, the lowest level is “Epic”. This means we typically would not tag features or stories because, since they are all related to an epic, the category is defined by the epic.
  2. Initiatives: Sometimes executives just want to understand how much is being spent on a particular initiative relative to others. If we assume that individual initiatives are modeled as Epics, then the simplest approach is just to link associated features and stories to the epic and track total work.
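The initiatives approach can be sketched as a simple rollup of work items to their parent epic (the feature data and field names are hypothetical):

```python
# Hypothetical feature records, each linked to its parent epic / initiative.
from collections import defaultdict

features = [
    {"id": "F-1", "epic": "Initiative A"},
    {"id": "F-2", "epic": "Initiative A"},
    {"id": "F-3", "epic": "Initiative B"},
]

# Roll up a count of work items per initiative.
per_initiative = defaultdict(int)
for f in features:
    per_initiative[f["epic"]] += 1
print(dict(per_initiative))  # → {'Initiative A': 2, 'Initiative B': 1}
```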

A combination of approaches is also used in many cases.

No matter the approach, we’d provide tools and dashboards so people can easily find data that was not tagged appropriately and cleanse it.
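Such a cleansing check can be as simple as listing items that are missing the required tag (the data and field names here are assumptions):

```python
# Hypothetical epic export; "horizon" stands in for whatever tag the
# working agreement requires.
required_tag = "horizon"
epics = [
    {"id": "E-1", "horizon": "Evaluating"},
    {"id": "E-2"},                 # tag missing entirely
    {"id": "E-3", "horizon": ""},  # tag present but empty
]

# Anything without a non-empty value for the required tag needs cleansing.
untagged = [e["id"] for e in epics if not e.get(required_tag)]
print(untagged)  # → ['E-2', 'E-3']
```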

Some organizations feel that they need to get very precise with these categories. For a particular category, for example Capitalization, they won’t just label a feature with “Capital budget” or “Operating budget” but will instead try to estimate the percentage of the Feature that fits into each category (e.g. a Feature is 33.3% capital budget and the rest operating).

In general, I recommend against this approach for a number of reasons:

  • There is a huge overhead to create and maintain this data, which reduces the chance that it will be available for all items.
  • The data may feel more precise, but it is probably less accurate overall, as people are typically guessing at the percentages.
  • There is usually sufficient accuracy in broad categorization to make the decisions required.
  • If you feel there is value in more precise data, tag at the lower level. If tagging epics isn’t giving you the information you need, tag features. The same goes for features and stories.

What Units Should We Use to Track Investments?

There are two basic units used by agilists to track proportions of investments:

  1. Count: of Epics, Features, or Stories
  2. Size: of Epic points, Feature points, or Story points

In most cases it doesn’t make much difference which unit is used. In general I’ve found that a count approach is both simpler and more consistent over the long term. Most organizations are more comfortable using size because size seems like it should matter. The reality is that the “law of large numbers” takes over for large implementations, so counts are “good enough” and produce roughly the same proportions as size information.
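To see the two units side by side, here is a small sketch (with made-up data) computing the proportion per category by count and by points:

```python
# Hypothetical work items tagged with a category and sized in points.
items = [
    {"category": "Evaluating", "points": 5},
    {"category": "Evaluating", "points": 3},
    {"category": "Extracting", "points": 8},
    {"category": "Extracting", "points": 5},
]

def share(weight):
    """Percentage of total per category, using the given weighting function."""
    total = sum(weight(i) for i in items)
    out = {}
    for i in items:
        out[i["category"]] = out.get(i["category"], 0) + weight(i)
    return {c: round(100 * v / total, 1) for c, v in out.items()}

by_count = share(lambda i: 1)             # → {'Evaluating': 50.0, 'Extracting': 50.0}
by_points = share(lambda i: i["points"])  # → {'Evaluating': 38.1, 'Extracting': 61.9}
print(by_count, by_points)
```

With only four items the two views differ noticeably; as the number of items grows, the gap between count-based and size-based proportions tends to shrink.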

Can We Trust Agile Data to Make Decisions?

A word on getting started with an agile reporting approach: initially it is difficult for organizations to understand how you turn points and counts into dollars. See How Do I Convert Points and Velocity to Dollars? for the general approach.
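The general idea can be illustrated with a rough sketch (all numbers hypothetical): divide the fully loaded cost of capacity for a period by the points delivered in that period to get a dollar rate per point.

```python
# Hypothetical figures for one quarter.
quarterly_cost = 1_200_000   # fully loaded cost of the teams for the quarter
points_delivered = 600       # story points completed in that quarter

dollars_per_point = quarterly_cost / points_delivered  # 2000.0
epic_points = 40
epic_cost = epic_points * dollars_per_point
print(f"${dollars_per_point:.0f}/point; a {epic_points}-point epic ≈ ${epic_cost:,.0f}")
```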

Many people also worry that counts of epics, features, and stories, or epic, feature, and story points, are less accurate than traditional time keeping systems. Experience shows that the data from these agile systems is in fact more accurate because, unlike traditional time card systems, the people doing the work actually have an interest (skin in the game) in the count or point data. For more information, see Can We Trust Story Points as a Measure of Effort?.

And finally, I’ve found that from a practical perspective you will have to run time keeping and point systems in parallel for a period of time so that people can get comfortable with the new approach.

Context

Context matters. This page was written out of the following context:

  1. Discussion assumes a large scale implementation of agile where senior executives have hundreds of people on teams and they are trying to ensure good governance.
  2. Some level of “tool” is used to record the information, even if, for example, this is as simple as a free-text tag field.
  3. Vocabulary assumes a SAFe structure of Epics → Features → Stories

