Aug 2, 2013

WHERE IT METRICS GO WRONG: 13 ISSUES TO AVOID


In a recent Forrester report — Develop Your Service Management And Automation Balanced Scorecard — I highlight some of the common mistakes made when designing and implementing infrastructure & operations (I&O) metrics. This metric “inappropriateness” is a common issue, but there are still many I&O organizations that don’t realize that they potentially have the wrong set of metrics. So, consider the following:
  1. When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
  2. Metrics are commonly viewed as an output in their own right. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” where all that matters is that you’ve met or exceeded your targets. Metrics reporting should see the bigger picture and drive improvement.
  3. I&O organizations have too many metrics. IT service management (ITSM) tools provide large numbers of reports and metrics, which encourages well-meaning I&O staff to go for quantity over quality. Just because we can measure something doesn’t mean that we should — and even if we should measure it, we don’t always need to report on it. The metrics we choose to disseminate should directly contribute to understanding whether we’ve achieved desired performance and outcomes.
  4. We measure things because they’re easy to measure, not because they’re important. I&O organizations shouldn’t spend more on collecting and reporting metrics than the value those metrics return, but that still isn’t an excuse to just measure the easy stuff. The availability of system reports and metrics again comes into play, with little or no effort needed to suck performance-related information out of the ITSM tool or tools. Consider why you report each and every metric in your current reporting pack and assess the value they provide versus the effort required to report them. Not only will you find metrics that you report on just because you can (“they were there already”), you will also find metrics that are “expensive” to provide but deliver little or no value (“they seemed like a good idea at the time”).
  5. I&O can easily fall into the trap of focusing on IT rather than business metrics. There is often a disconnect between IT activity and performance on the one hand and business objectives, demands, and drivers on the other. So consider your existing metrics from a business perspective: What does the fact that there are 4,000 incidents per month actually mean? From an ITSM perspective, it might mean that we’ve been busy or that it’s 10% lower (or higher) than the previous month. But is the business actually interested in incident volumes? If it is, does it interpret that as “you make a lot of mistakes in IT” or as “you’ve prevented the business from working 4,000 times this month”?
  6. There is no structure to, or context between, metrics. Metrics can be stuck in silos rather than being end-to-end, and there is a lack of correlation and context between different metrics. A good example is the excitement over the fact that the cost-per-incident has dropped — but closer inspection of other metrics shows that the cost has gone down not because we’ve become more efficient but because we had more incidents than normal during the reporting period (see the cost-per-incident sketch after this list).
  7. We take a one-dimensional view of metrics. I&O organizations often limit themselves to looking at performance in monthly silos — they don’t look at the month-on-month, quarter-on-quarter, or even year-on-year trends. So while the I&O organization might hit its targets today, there might be a failure just around the corner as performance degrades over time (see the trend sketch after this list).
  8. The metric hierarchy isn’t clear. Many don’t appreciate that: 1) not all metrics are born equal — there are differences between metrics, key performance indicators (KPIs), critical success factors (CSFs), and strategic objectives; and 2) metrics need to differentiate between a number of factors, such as hierarchy level, recipients, and their ultimate use. Different people will have different uses for different metrics, so one-size reporting definitely doesn’t fit all. As with all reporting, tell people what they need to know, when they need to know it, and in a format that’s easy for them to consume. If your metrics don’t support decision-making, then you’re suffering from one or more of these listed issues.
  9. We place too much emphasis on I&O benchmarks. The ability to compare yourself with other I&O organizations can help show how fantastic your organization is or justify spending on improvements. However, benchmark data is often misleading; one might be comparing apples with oranges. Two great examples are cost-per-incident and incidents handled per service desk agent per hour. With cost-per-incident, how do you know which costs have been included and which haven’t? And the volume, types, and occurrence patterns of incidents will skew both numbers (see the benchmark sketch after this list).
  10. Metric reporting is poorly designed and delivered. I&O professionals can spend more time collecting metric data than working out the best way for it to be delivered and consumed — it’s similar to communication in general, where a message sent doesn’t always equate to a message received and understood. There is also plenty of scope to make metrics and reporting more interesting to consume.
  11. We overlook the behavioral aspects of metrics. At a higher level, we aim for, and then reward, failure — we set targets such as 99.9% availability rather than saying, “We will aim for 100% availability, and we will never go below 99.9%.” At a team or individual level, metrics can drive the wrong behaviors, with particular metrics making individuals act for personal rather than corporate success. Metrics can also conflict and pull I&O staff in different directions. A good example is the tension between two common service desk metrics — average call-handling time and first-contact resolution. Scoring highly against one metric will adversely affect the other, so for I&O professionals to use one in isolation for team or individual performance measurement is potentially dangerous to operations and IT service delivery.
  12. I&O can become blinkered by the existing metrics. When your organization consistently makes its targets, the standard response is to increase the number or scope of targets. But this is not necessarily the right approach. Instead, I&O execs need to consider whether the metrics are still worthwhile — whether they still add value. Sometimes, the right answer is to abolish a particular metric and replace it with one that better reflects your current business needs and any improvement or degradation in performance that you’ve experienced.
  13. Metrics and performance can be easily misunderstood. A good example is incident volumes — a reduction in incident volumes is a good thing, right? Not necessarily. Consider this: A service desk providing a poor level of service might see incident volumes drop as internal customers decide that calling or emailing is futile and start seeking resolution elsewhere or struggling on with workarounds. Conversely, a service desk doing a fantastic job at resolving incidents might see an increase in volumes as more users reach out for help. Thus, I&O leaders need to view customer satisfaction scores in parallel with incident volume metrics to accurately gauge the effectiveness of a service desk.
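To make issue 6’s cost-per-incident trap concrete, here is a minimal back-of-the-envelope sketch in Python (the cost and volume figures are invented purely for illustration):

```python
# Hypothetical figures only: support costs are largely fixed month to month,
# so a spike in incident volume alone drags cost-per-incident down.
months = {
    "May": {"support_cost": 120_000, "incidents": 4_000},
    "Jun": {"support_cost": 121_000, "incidents": 5_500},  # a worse month, not a cheaper one
}

for name, data in months.items():
    cost_per_incident = data["support_cost"] / data["incidents"]
    print(f"{name}: ${cost_per_incident:,.2f} per incident "
          f"({data['incidents']:,} incidents, ${data['support_cost']:,} total)")

# May comes out at about $30 per incident and Jun at about $22: a 27% "improvement"
# even though total cost rose and the business suffered 1,500 more incidents.
```

The metric moved in the “right” direction for entirely the wrong reason, which is exactly why it needs to be read alongside volume and cost metrics rather than in isolation.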
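Similarly, for issue 7, a trivial sketch (again with made-up availability numbers) of how every monthly report can be green while the trend points at a breach:

```python
# Hypothetical monthly availability (%) against a 99.5% target.
target = 99.5
availability = [99.90, 99.84, 99.79, 99.75, 99.70, 99.66]  # last six months

print(["PASS" if month >= target else "FAIL" for month in availability])
# ['PASS', 'PASS', 'PASS', 'PASS', 'PASS', 'PASS']: every monthly report looks fine.

# Naive month-on-month trend: average drift per month and months until breach.
deltas = [later - earlier for earlier, later in zip(availability, availability[1:])]
avg_drift = sum(deltas) / len(deltas)
months_to_breach = (availability[-1] - target) / -avg_drift
print(f"Drifting {avg_drift:.3f} points/month; "
      f"breach in roughly {months_to_breach:.1f} months if nothing changes.")
```

Six consecutive passes hide a steady decline that a simple trend view surfaces months before the first failed target.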
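And for issue 9, a small sketch showing how two hypothetical organizations with identical operations can report very different cost-per-incident benchmarks simply by counting different cost categories:

```python
# Two hypothetical organizations with an identical service desk workload.
# Org A counts only direct staff cost in "cost per incident";
# Org B also rolls in tooling, facilities, and management overhead.
incidents = 4_000
monthly_costs = {"staff": 90_000, "tooling": 12_000, "facilities": 8_000, "management": 15_000}

org_a_scope = ("staff",)
org_b_scope = ("staff", "tooling", "facilities", "management")

cpi_a = sum(monthly_costs[item] for item in org_a_scope) / incidents
cpi_b = sum(monthly_costs[item] for item in org_b_scope) / incidents
print(f"Org A: ${cpi_a:.2f} per incident | Org B: ${cpi_b:.2f} per incident")
# Org A looks roughly 28% "cheaper" despite running exactly the same operation,
# purely because of what each chose to count.
```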
Finally, consider this: In the wise words of Ivor McFarlane of IBM: “If we use the wrong metrics, do we not get better at the wrong things?”

Top 10 ITSM challenges for 2013 …



The analyst’s view of 2012 
    • ITSM tool inquiries are more popular than ever – people continue to blame their existing ITSM tools for a multitude of sins wherever possible. And you can’t escape the fact that these inquiries are virtually all SaaS-related (even if the client eventually chooses to go with an on-premises tool).
    • ITSM KPIs and benchmarks are still in high demand, but I continue to see a heavy bias towards operational performance (“what we do”) rather than “what we achieve” in IT.
    • IT asset management (particularly software asset management) has seen strong growth in the latter half of 2012 driven by a need to reduce costs. Interestingly there are now more questions about how to get started than about the differences between different ITAM or SAM tools.
    • Service catalog rose from the ashes of failed service catalog technology projects, but I continued to see issues with organizations not knowing what their services are or what they actually want to accomplish with their service catalog initiative beyond buying a tool.
    But there was also a new breed of inquiry, one that is slowly emerging from the large shadow cast by the sheer scale of an organization’s IT infrastructure. These are inquiries related to understanding what IT achieves rather than what it does, and they come in many forms:
    • “How can we better serve our customers?”
    • “How can we demonstrate the value we (in IT) deliver?”
    • “How do we evolve into an IT organization that’s fit for 2017 and the changing business needs and expectations?”
    • “How do we improve IT support based on actual business needs?”
    So there is an emerging change in “IT people mindsets.” But don’t get me wrong; there are still many more minds to change (including those of the people who fund IT), and I can’t help but comment on the fact that I see geographical differences similar to what we have traditionally seen with ITIL adoption. Importantly, though, I am starting to speak with more people who see IT (and ITSM) as the means to an end rather than the end itself.
    And so to the Top 10 ITSM challenges for 2013 …
    … and yes I know I have continued to use “ITSM” here but it is a necessary evil if I want people to read this blog – phrases like “IT service delivery” just don’t sell virtual copy (yet).
    1. IT cost transparency. Something has still got to give in terms of what IT costs — IT is and will continue to be a sizable expense to the business. The IT organization is spending the business’ money, and so the business wants to know whether it is being spent wisely (and who can blame them). How many IT shops know if they are investing the business’ money wisely outside of projects?
    2. Value demonstration. Is IT still just a cost center, or has your IT organization been able to translate IT investment into demonstrable business success? I still, somewhat cheekily, say that “if we could demonstrate the business value derived from IT, surely we would be being asked to spend more rather than having to respond to corporately mandated, quick-fix, end-of-year budget cuts.”
    3. Agility. The speed of business change continues to dictate a rapid response from IT that many struggle with — as a simple example, yesterday my nephew told me of his five-week wait for a laptop at the bank he recently joined. It is not only speed and flexibility; it is also “agility of mind” – a change in I&O mindset that asks “why not?” rather than “why?”
    4. Availability. Nothing new here (again). The business needs high-quality, highly available IT (or business) services. The difference is in business expectations and available alternatives. For a number of reasons, the business continues to be less forgiving of IT failure and, again, who can blame them?
    5. “Personal hardware.”  End user devices will continue to be a big challenge for IT in 2013. Whether it is the fact that our “internal customers” are unhappy with their “outdated” corporate laptops or the fact that they can’t have corporate iPads or the whole “can of worms” that is BYOD (bring your own device), personal productivity hardware will again be a battleground of business discontent in 2013.
    6. Support and customer service. For me, support is one thing and customer service is another; ideally IT delivers both. Support is ultimately about helping people consume IT services, not just about supporting the technology that delivers those services. And service-centricity from frontline IT staff alone is not enough; it needs to extend to all IT staff. The same is true for customer-centricity.
    7. Cloud. As cloud adoption continues, are we looking at cloud as a technical or business solution, or both? Do we know enough about the status quo to make informed decisions about moving IT services to the cloud? Probably not; yet for many, cloud is the answer. But I still can’t help thinking that we haven’t really taken the time to fully understand the question.
    8. Mobility. BYOD comes into play here again, but I think that a bigger issue is at hand — that we are still technology-centric. We all hear talk about MDM (mobile device management) as “THE big issue.” IMO, however, this is old-skool IT, with the device a red herring that is of little interest to the customer (unless IT is providing outdated devices). Your customers want (or at least we hope that they continue to want) to access your services any which way they can and need to. Mobility is about people.
    9. Compliance. Whether it’s internal or external regulatory compliance (or governance), most of the above will potentially have a negative knock-on effect on compliance, whether it be SOX, software compliance, or meeting internal requirements for “transparency and robustness.” With everything going on elsewhere, it is easy for me to imagine internal controls degrading or, at a minimum, a failure to react to new risks.

Aug 1, 2013

Lean and Six Sigma – Are they related?


Last month, we were undergoing Six Sigma training, and many members, like me, had differing opinions on the difference between Six Sigma and Lean, so we asked the trainer to explain the difference between the two methodologies/tools.

After the discussion, I really wanted to come up with my own way of articulating the differences, so I referred to a few blogs/sites (obviously this has all been defined and articulated by many gurus already… nothing to re-invent) :)

So I thought, let me articulate it in my own words (in not more than one page, of course!); it might also help my team members, and I can share my thoughts with a broader audience :)

Let’s get down to business… hopefully this information will be useful to a number of people.
In simple words: Lean is an overall philosophy of continuous improvement based on the Toyota Production System.

The Six Sigma methodology was developed by individuals at Motorola, with some outside help.
Lean is about delivering value in the eyes of the customer through continuous efforts to eliminate waste (the 8 wastes – over-production, waiting, transport, poor process design, inventory, motion, defects, and under-utilization of resources and skills).

At this stage, I am sure my team will come back to me with: “Chetan, how can these wastes be related to, or interpreted for, the software industry?”

Lean focuses on continuous improvement while demonstrating respect for people. Lean is also a mindset and an enabling strategy which helps organizations to effectively implement business strategies and initiatives to achieve overall objectives.

Lean is about getting the entire organization to make improvements on a daily basis – it’s cultural change.

I feel that the benefits only come if Lean is implemented correctly. Is “implemented correctly” the right phrase? Yes – and here is what that looks like:

Lean implementers/Lean leaders should ask the right questions, helping the people they work for to learn and discover the right solutions to the problems they are trying to solve. Lean requires employee/member engagement, continuous learning, and employee empowerment.

Six Sigma is a statistical methodology for improvement that focuses on eliminating variation. Six Sigma and Lean both use the scientific method to solve problems. Six Sigma is primarily a project-based, top-down approach.
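To put a number on “eliminating variation”: Six Sigma practitioners typically express process performance as defects per million opportunities (DPMO) and a corresponding sigma level, conventionally adding a 1.5-sigma long-term shift. Here is a rough sketch of that conversion in Python (the defect counts are made up purely for illustration):

```python
from statistics import NormalDist

def sigma_level(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Approximate process sigma from defect counts, using the
    conventional 1.5-sigma long-term shift."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    return NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

# Made-up example: 350 defects found across 10,000 delivered work items,
# each with 5 defect opportunities (DPMO = 7,000).
print(round(sigma_level(350, 10_000, 5), 2))  # roughly 3.96
```

The point is only that Six Sigma reasons about performance statistically; the real work is in finding and removing the sources of variation behind that number.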

Belts are trained to lead projects, and when I discuss Six Sigma with friends in the domain, they often tell me that in many organizations we end up doing things to people rather than with people.

There are many people who view the belt approach as elitist when applied broadly. It does not promote truly effective employee engagement at all levels of the organization.

A large number of companies have found that they struggle to sustain the momentum of their Six Sigma deployments. My observation is that Six Sigma tends to be a push approach.
In my view, what we should be aiming to create is a pull for continuous improvement throughout the organization.

Six Sigma, from my perspective, is a methodology that complements and enhances Lean.

Many of my friends who implemented Six Sigma and have since moved to a Lean Sigma approach have told me they wish they had started with Lean and then enhanced their deployment with Six Sigma.

I feel it makes more sense to eliminate as much waste as possible up front and then focus on eliminating variation.

As I read somewhere, Toyota has always used statistical methods in the appropriate situations. Currently, the trend is a major emphasis on Lean implementations and a decrease in new Six Sigma deployments.

I am sure that many people have perspectives that are different from mine. I’m open to suggestions, and we can discuss…