Mar 12, 2014

Practical ITSM Advice: Defining Availability For An IT Service

Whenever I am confused or bored with my work, I take some time to write about IT, services, quality, methodologies, and so on in my blog. Before I do, the first thing that strikes my mind is to visit Stephen Mann's blog, and after reading it I usually forget to write my own post and simply share the content Stephen has written. I hate that, but Stephen always writes in a practical way, and that is what I like and admire. Working in a telecom IT service provider company, I have always been confused about defining and calculating application availability. The availability SLA target is in most cases defined as 99.999% or 99.955%. How do we calculate availability? Is it even possible to meet a target of 99.999%? Is the way we calculate the availability SLA correct? Do we consider planned downtime when deriving availability SLAs? Do we consider nodes and regions, or the number of users impacted? Stuart Rance has answered most of my queries, and Stephen has posted the article on his blog. It is interesting and worth reading: http://blogs.forrester.com/stephen_mann/13-05-06-practical_itsm_advice_defining_availability_for_an_it_service
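
As a rough illustration of the arithmetic behind those targets, here is a minimal sketch in Python. It assumes a simple elapsed-time model; the function names, the figures, and the way planned downtime is treated are my own assumptions for illustration, not the method from Stuart Rance's article.

```python
# Minimal sketch of the availability arithmetic discussed above.
# Whether planned downtime is excluded from agreed service time or counted
# as downtime is a contractual decision, which is exactly the ambiguity
# the questions above are about.

def allowed_downtime_minutes(target_pct: float, period_minutes: float) -> float:
    """Downtime budget implied by an availability target over a period."""
    return period_minutes * (1 - target_pct / 100)

def measured_availability(agreed_service_minutes: float,
                          downtime_minutes: float) -> float:
    """Availability % = (agreed service time - downtime) / agreed service time."""
    return 100 * (agreed_service_minutes - downtime_minutes) / agreed_service_minutes

if __name__ == "__main__":
    month = 30 * 24 * 60  # 43,200 minutes of agreed service time
    # A 99.999% target over a month leaves roughly 26 seconds of downtime...
    print(round(allowed_downtime_minutes(99.999, month), 3))  # ~0.432 minutes
    # ...while a single 4-hour outage drops measured availability to ~99.44%.
    print(round(measured_availability(month, 4 * 60), 3))     # 99.444
```

Whether you then weight the figure by impacted nodes, regions, or users (the questions raised above) is a further design decision; the sketch deliberately ignores it.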

Aug 2, 2013

Where IT Metrics Go Wrong: 13 Issues To Avoid


In a recent Forrester report — Develop Your Service Management And Automation Balanced Scorecard — I highlight some of the common mistakes made when designing and implementing infrastructure & operations (I&O) metrics. This metric “inappropriateness” is a common issue, but there are still many I&O organizations that don’t realize that they potentially have the wrong set of metrics. So, consider the following:
  1. When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
  2. Metrics are commonly viewed as an output in their own right. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” where all that matters is that you’ve met or exceeded your targets. Metrics reporting should see the bigger picture and drive improvement.
  3. I&O organizations have too many metrics. IT service management (ITSM) tools provide large numbers of reports and metrics, which encourages well-meaning I&O staff to go for quantity over quality. Just because we can measure something doesn’t mean that we should — and even if we should measure it, we don’t always need to report on it. The metrics we choose to disseminate should directly contribute to understanding whether we’ve achieved desired performance and outcomes.
  4. We measure things because they’re easy to measure, not because they’re important. I&O organizations shouldn’t spend more on collecting and reporting metrics than the value they get back from them, but that still isn’t an excuse to just measure the easy stuff. The availability of system reports and metrics again comes into play, with little or no effort needed to suck performance-related information out of the ITSM tool or tools. Consider why you report each and every metric in your current reporting pack and assess the value they provide versus the effort required to report them. Not only will you find metrics that you report on just because you can (“they were there already”), you will also find metrics that are “expensive” to provide but deliver little or no value (“they seemed like a good idea at the time”).
  5. I&O can easily fall into the trap of focusing on IT rather than business metrics. There is often a disconnect between IT activity and performance and business objectives, demands, and drivers. So consider your existing metrics from a business perspective: What does the fact that there are 4,000 incidents per month actually mean? From an ITSM perspective, it might mean that we’ve been busy or that it’s 10% lower (or higher) than the previous month. But is the business actually interested in incident volumes? If it is, does it interpret that as “you make a lot of mistakes in IT” or as “you’ve prevented the business working 4,000 times this month”?
  6. There is no structure for or context between metrics. Metrics can be stuck in silos rather than being end-to-end. There is also a lack of correlation and context between different metrics. A good example is the excitement over the fact that the cost-per-incident has dropped — but closer inspection of other metrics shows that the cost has gone down not because we’ve become more efficient but because we had more incidents during the reporting period than normal (a quick numeric sketch after this list illustrates the effect).
  7. We take a one-dimensional view of metrics. I&O organizations can limit themselves to looking at performance in monthly silos — they don’t look at the month-on-month, quarter-on-quarter, or even year-on-year trends. So while the I&O organization might hit its targets, there might be a failure just around the corner as performance degrades over time.
  8. The metric hierarchy isn’t clear. Many don’t appreciate that: 1) not all metrics are born equal — there are differences between metrics, key performance indicators (KPIs), critical success factors (CSFs), and strategic objectives; and 2) metrics need to differentiate between a number of factors, such as hierarchy level, recipients, and their ultimate use. Different people will have different uses for different metrics, so one-size reporting definitely doesn’t fit all. As with all reporting, tell people what they need to know, when they need to know it, and in a format that’s easy for them to consume. If your metrics don’t support decision-making, then you’re suffering from one or more of these listed issues.
  9. We place too much emphasis on I&O benchmarks. The ability to compare yourself with other I&O organizations can help show how fantastic your organization is or justify spending on improvements. However, benchmark data is often misleading; one might be comparing apples with oranges. Two great examples are cost-per-incident and incidents handled per-service-desk-agent per-hour. In cost-per-incident, how do you know which costs have been included and which haven’t? The volume, types, and occurrence patterns of incidents will also affect the statistics. The incident profile will also affect incidents handled per-service-desk-agent per-hour statistics.
  10. Metric reporting is poorly designed and delivered. I&O professionals can spend more time collecting metric data than understanding the best way for it to be delivered and consumed — it’s similar to communications per se where a message sent doesn’t always equate to a message received and understood. You can also make metrics and reporting more interesting.
  11. We overlook the behavioral aspects of metrics. At a higher level, we aim for, and then reward, failure — we set targets such as 99.9% availability rather than saying, “We will aim for 100% availability, and we will never go below 99.9%.” At a team or individual level, metrics can drive the wrong behaviors, with particular metrics making individuals act for personal rather than corporate success. Metrics can also conflict and pull I&O staff in different directions. A good example is the tension between two common service desk metrics — average call-handling time and first-contact resolution. Scoring highly against one metric will adversely affect the other, so for I&O professionals to use one in isolation for team or individual performance measurement is potentially dangerous to operations and IT service delivery.
  12. I&O can become blinkered by the existing metrics. When your organization consistently makes its targets, the standard response is to increase the number or scope of targets. But this is not necessarily the right approach. Instead, I&O execs need to consider whether the metrics are still worthwhile — whether they still add value. Sometimes, the right answer is to abolish a particular metric and replace it with one that better reflects your current business needs and any improvement or degradation in performance that you’ve experienced.
  13. Metrics and performance can be easily misunderstood. A good example is incident volumes — a reduction in incident volumes is a good thing, right? Not necessarily. Consider this: A service desk providing a poor level of service might see incident volumes drop as internal customers decide that calling or emailing is futile and start seeking resolution elsewhere or struggling on with workarounds. Conversely, a service desk doing a fantastic job at resolving incidents might see an increase in volumes as more users reach out for help. Thus, I&O leaders need to view customer satisfaction scores in parallel with incident volume metrics to accurately gauge the effectiveness of a service desk.
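To make the correlation trap in item 6 concrete, here is a quick numeric sketch; the figures are my own illustrative assumptions, not data from the report. With a roughly fixed support cost, a spike in incident volume pushes cost-per-incident down even though nothing has become more efficient.

```python
# Illustrative numbers only; the figures below are assumptions, not benchmarks.
support_cost = 50_000.0                # monthly service desk cost, roughly fixed

normal_month_incidents = 4_000
spike_month_incidents = 5_500          # e.g. a month inflated by a failed change

print(support_cost / normal_month_incidents)  # 12.5  cost per incident
print(support_cost / spike_month_incidents)   # ~9.09: looks like an improvement,
                                              # but only because more went wrong
```

Read in isolation, the second figure looks like progress; read alongside the incident-volume metric, it tells the opposite story.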
Finally, consider this: In the wise words of Ivor McFarlane of IBM: “If we use the wrong metrics, do we not get better at the wrong things?”

Top 10 ITSM challenges for 2013 …



The analyst’s view of 2012 
    • ITSM tool inquiries are more popular than ever – people continue to blame their existing ITSM tools for a multitude of sins wherever possible. And you can’t escape the fact that these inquiries are virtually all related to SaaS (even if the client eventually chooses to go with an on-premise tool).
    • ITSM KPIs and benchmarks are still in high demand, but I continue to see a heavy bias towards operational performance (“what we do”) rather than “what we achieve” in IT.
    • IT asset management (particularly software asset management) has seen strong growth in the latter half of 2012 driven by a need to reduce costs. Interestingly there are now more questions about how to get started than about the differences between different ITAM or SAM tools.
    • Service catalog rose from the ashes of failed service catalog technology projects, but I continued to see issues with organizations not knowing what their services are or what they actually wanted to accomplish with their service catalog initiative beyond buying a tool.
    But there was also a new breed of inquiry, one that is slowly emerging from the large shadow cast by the enormity of an organization’s IT infrastructure. These are inquiries related to understanding what IT achieves rather than what it does, and they come in many forms:
    • “How can we better serve our customers?”
    • “How can we demonstrate the value we (in IT) deliver?”
    • “How do we evolve into an IT organization that’s fit for 2017 and the changing business needs and expectations?”
    • “How do we improve IT support based on actual business needs?”
    So there is an emerging change in “IT people mindsets.” But don’t get me wrong; there are still many more minds to change (including those of the people who fund IT), and I can’t help but comment on the fact that I see geographical differences similar to what we have traditionally seen with ITIL adoption. Importantly, though, I am starting to speak with more people who see IT (and ITSM) as a means to an end rather than the end itself.
    And so to the Top 10 ITSM challenges for 2013 …
    … and yes I know I have continued to use “ITSM” here but it is a necessary evil if I want people to read this blog – phrases like “IT service delivery” just don’t sell virtual copy (yet).
    1. IT cost transparency. Something has still got to give in terms of what IT costs — IT is and will continue to be a sizable expense to the business. The IT organization is spending the business’ money, and so the business wants to know whether it is being spent wisely (and who can blame them). How many IT shops know if they are investing the business’ money wisely outside of projects?
    2. Value demonstration. Is IT still just a cost center or has your IT organization been able to translate IT investment into demonstrable business success? I still, somewhat cheekily, say that “if we could demonstrate the business value derived from IT, surely we would be asked to spend more rather than having to respond to corporately mandated, quick-fix, end-of-year budget cuts.”
    3. Agility. The speed of business change continues to dictate a rapid response from IT that many struggle with. As a simple example, yesterday my nephew told me of his five-week wait for a laptop at the bank he recently joined. It is not only about speed and flexibility; it is also about “agility of mind,” a change in I&O mindset that asks “why not?” rather than “why?”
    4. Availability. Nothing new here (again). The business needs high quality, highly available IT (or business) services. The difference is in business expectations and available alternatives. For a number of reasons, the business continues to be less forgiving of IT failure and, again, who can blame them.
    5. “Personal hardware.” End-user devices will continue to be a big challenge for IT in 2013. Whether it is the fact that our “internal customers” are unhappy with their “outdated” corporate laptops or the fact that they can’t have corporate iPads or the whole “can of worms” that is BYOD (bring your own device), personal productivity hardware will again be a battleground of business discontent in 2013.
    6. Support and customer service. For me, support is one thing and customer service is another; ideally IT delivers both. It is ultimately about supporting the consumption of IT services by people rather than just supporting the technology that delivers those services. Service-centricity by frontline IT staff is not enough; it needs to extend to all IT staff. The same is true for customer-centricity.
    7. Cloud. As cloud adoption continues, are we looking at cloud as a technical or business solution, or both? Do we know enough about the status quo to make informed decisions about moving IT services to the cloud? Probably not; yet for many, cloud is the answer. But I still can’t help thinking that we haven’t really taken the time to fully understand the question.
    8. Mobility. BYOD comes into play here again, but I think that a bigger issue is at hand — that we are still technology-centric. We all hear talk about MDM (mobile device management) as “THE big issue.” IMO, however, this is old-skool IT; the device is a red herring and of little interest to the customer (unless IT is providing outdated devices). Your customers want (or at least we hope that they continue to want) to access your services any which way they can and need to. Mobility is about people.
    9. Compliance. Whether it’s internal or external regulatory compliance (or governance), most of the above will potentially have a negative knock-on effect on compliance, whether it be SOX, software compliance, or meeting internal requirements for “transparency and robustness.” With everything else going on, it is easy for me to imagine internal controls degrading or, at a minimum, new risks not being addressed.