- When it comes to metrics, I&O is not always entirely sure what it’s doing or why. We often create metrics because we feel that we “should” rather than because we have definite reasons to capture and analyze data and consider performance against targets. Ask yourself: “Why do we want or need metrics?” Do your metrics deliver against this? You won’t be alone if they don’t.
- Metrics are commonly viewed as an output in their own right. Far too many I&O organizations see metrics as the final output rather than as an input into something else, such as business conversations about services or improvement activity. The metrics become a “corporate game” where all that matters is that you’ve met or exceeded your targets. Metrics reporting should see the bigger picture and drive improvement.
- I&O organizations have too many metrics. IT service management (ITSM) tools provide large numbers of reports and metrics, which encourages well-meaning I&O staff to go for quantity over quality. Just because we can measure something doesn’t mean that we should — and even if we should measure it, we don’t always need to report on it. The metrics we choose to disseminate should directly contribute to understanding whether we’ve achieved desired performance and outcomes.
- We measure things because they’re easy to measure, not because they’re important. I&O organizations shouldn’t spend more time collecting and reporting metrics than the value we get from them, but that still isn’t an excuse to just measure the easy stuff. The ready availability of system reports and metrics again comes into play, with little or no effort needed to pull performance-related information out of the ITSM tool or tools. Consider why you report each and every metric in your current reporting pack, and assess the value each provides versus the effort required to report it. Not only will you find metrics that you report on just because you can (“they were there already”), you will also find metrics that are “expensive” to produce but provide little or no value (“they seemed like a good idea at the time”).
- I&O can easily fall into the trap of focusing on IT rather than business metrics. There is often a disconnect between IT activity and performance and business objectives, demands, and drivers. So consider your existing metrics from a business perspective: What does the fact that there are 4,000 incidents per month actually mean? From an ITSM perspective, it might mean that we’ve been busy or that it’s 10% lower (or higher) than the previous month. But is the business actually interested in incident volumes? If it is, does it interpret that as “you make a lot of mistakes in IT” or as “you’ve prevented the business working 4,000 times this month”?
- There is no structure for or context between metrics. Metrics can be stuck in silos rather than being end-to-end. There is also a lack of correlation and context between different metrics. A good example is the excitement over the fact that the cost-per-incident has dropped — but closer inspection of other metrics shows that the cost has gone down not because we’ve become more efficient but because we had more incidents during the reporting period than normal.
- We take a one-dimensional view of metrics. I&O organizations can limit themselves to looking at performance in monthly silos — they don’t look at the month-on-month, quarter-on-quarter, or even year-on-year trends. So while the I&O organization might hit its targets, there might be a failure just around the corner as performance degrades over time.
- The metric hierarchy isn’t clear. Many don’t appreciate that: 1) not all metrics are born equal — there are differences between metrics, key performance indicators (KPIs), critical success factors (CSFs), and strategic objectives; and 2) metrics need to differentiate between a number of factors, such as hierarchy level, recipients, and their ultimate use. Different people will have different uses for different metrics, so one-size reporting definitely doesn’t fit all. As with all reporting, tell people what they need to know, when they need to know it, and in a format that’s easy for them to consume. If your metrics don’t support decision-making, then you’re suffering from one or more of these listed issues.
- We place too much emphasis on I&O benchmarks. The ability to compare yourself with other I&O organizations can help show how fantastic your organization is or justify spending on improvements. However, benchmark data is often misleading; one might be comparing apples with oranges. Two great examples are cost-per-incident and incidents handled per-service-desk-agent per-hour. For cost-per-incident, how do you know which costs have been included and which haven’t? And in both cases, the incident profile — the volume, types, and occurrence patterns of incidents — will skew the statistics.
- Metric reporting is poorly designed and delivered. I&O professionals can spend more time collecting metric data than understanding the best way for it to be delivered and consumed — it’s similar to communications per se where a message sent doesn’t always equate to a message received and understood. You can also make metrics and reporting more interesting.
- We overlook the behavioral aspects of metrics. At a higher level, we aim for, and then reward, failure — we set targets such as 99.9% availability rather than saying, “We will aim for 100% availability, and we will never go below 99.9%.” At a team or individual level, metrics can drive the wrong behaviors, with particular metrics making individuals act for personal rather than corporate success. Metrics can also conflict and pull I&O staff in different directions. A good example is the tension between two common service desk metrics — average call-handling time and first-contact resolution. Scoring highly against one metric will adversely affect the other, so for I&O professionals to use one in isolation for team or individual performance measurement is potentially dangerous to operations and IT service delivery.
- I&O can become blinkered by the existing metrics. When your organization consistently makes its targets, the standard response is to increase the number or scope of targets. But this is not necessarily the right approach. Instead, I&O execs need to consider whether the metrics are still worthwhile — whether they still add value. Sometimes, the right answer is to abolish a particular metric and replace it with one that better reflects your current business needs and any improvement or degradation in performance that you’ve experienced.
- Metrics and performance can be easily misunderstood. A good example is incident volumes — a reduction in incident volumes is a good thing, right? Not necessarily. Consider this: A service desk providing a poor level of service might see incident volumes drop as internal customers decide that calling or emailing is futile and start seeking resolution elsewhere or struggling on with workarounds. Conversely, a service desk doing a fantastic job at resolving incidents might see an increase in volumes as more users reach out for help. Thus, I&O leaders need to view customer satisfaction scores in parallel with incident volume metrics to accurately gauge the effectiveness of a service desk.
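Several of the traps above come from reading one number in isolation. As a minimal sketch of the incident-volume point, the rule of thumb and figures below are hypothetical, invented for illustration rather than taken from any ITSM standard:

```python
# Pair a month-on-month incident volume change with a customer
# satisfaction (CSAT) score instead of reading volume alone.
# The thresholds here are illustrative, not prescriptive.
def interpret(volume_change_pct, csat_score):
    """Combine a volume change (%) with a 1-5 CSAT score."""
    if volume_change_pct < 0 and csat_score < 3.0:
        return "warning: users may be abandoning the service desk"
    if volume_change_pct > 0 and csat_score >= 4.0:
        return "healthy: users trust the desk enough to keep calling"
    return "inconclusive: investigate further"

print(interpret(-15, 2.4))  # falling volume + poor CSAT -> warning
print(interpret(+10, 4.3))  # rising volume + high CSAT -> healthy
```

The point is not these particular thresholds but the shape of the check: two metrics that are ambiguous on their own become interpretable when read together.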
Aug 2, 2013
WHERE IT METRICS GO WRONG: 13 ISSUES TO AVOID
Top 10 ITSM challenges for 2013 …
- ITSM tool inquiries are more popular than ever – people continue to blame their existing ITSM tools for a multitude of sins wherever possible. And you also can’t escape the fact that these inquiries are virtually all related to SaaS (even if the client eventually chooses to go with an on-premise tool).
- ITSM KPIs and benchmarks are still in high demand, but I continue to see a heavy bias towards operational performance (“what we do”) rather than “what we achieve” in IT.
- IT asset management (particularly software asset management) has seen strong growth in the latter half of 2012 driven by a need to reduce costs. Interestingly there are now more questions about how to get started than about the differences between different ITAM or SAM tools.
- Service catalog rose from the ashes of failed service catalog technology projects, but I continued to see issues with organizations not knowing what their services are or what they actually wanted to accomplish with their service catalog initiative beyond buying a tool.
But there was also a new breed of inquiry, one that is slowly emerging from the large shadow cast by the enormity of an organization’s IT infrastructure. These are inquiries related to understanding what IT achieves rather than what it does, and they come in many forms:
- “How can we better serve our customers?”
- “How can we demonstrate the value we (in IT) deliver?”
- “How do we evolve into an IT organization that’s fit for 2017 and the changing business needs and expectations?”
- “How do we improve IT support based on actual business needs?”
So there is an emerging change in “IT people mindsets.” But don’t get me wrong; there are still many more minds to change (including those of the people that fund IT), and I can’t help but comment on the fact that I see geographical differences similar to what we have traditionally seen with ITIL adoption. Importantly, though, I am starting to speak with more people who see IT (and ITSM) as the means to an end rather than the end itself.

And so to the Top 10 ITSM challenges for 2013 … and yes, I know I have continued to use “ITSM” here, but it is a necessary evil if I want people to read this blog – phrases like “IT service delivery” just don’t sell virtual copy (yet).

- IT cost transparency. Something has still got to give in terms of what IT costs — IT is and will continue to be a sizable expense to the business. The IT organization is spending the business’ money, and so the business wants to know whether it is being spent wisely (and who can blame them). How many IT shops know if they are investing the business’ money wisely outside of projects?
- Value demonstration. Is IT still just a cost center, or has your IT organization been able to translate IT investment into demonstrable business success? I still somewhat cheekily say that “if we could demonstrate the business value derived from IT, surely we would be asked to spend more rather than having to respond to corporately mandated, quick-fix, end-of-year budget cuts.”
- Agility. The speed of business change continues to dictate a rapid response from IT that many struggle with — as a simple example, yesterday my nephew told me of his five-week wait for a laptop at the bank he recently joined. Not only is it speed and flexibility, it is also “agility of mind”: a change in I&O mindset that asks “why not?” rather than “why?”
- Availability. Nothing new here (again). The business needs high quality, highly available IT (or business) services. The difference is in business expectations and available alternatives. For a number of reasons, the business continues to be less forgiving of IT failure and, again, who can blame them.
- “Personal hardware.” End user devices will continue to be a big challenge for IT in 2013. Whether it is the fact that our “internal customers” are unhappy with their “outdated” corporate laptops or the fact that they can’t have corporate iPads or the whole “can of worms” that is BYOD (bring your own device), personal productivity hardware will again be a battleground of business discontent in 2013.
- Support and customer service. For me, support is one thing and customer service is another; ideally IT delivers both. Support is ultimately about supporting the consumption of IT services by people rather than just supporting the technology that delivers those services. And service-centricity by frontline IT staff is not enough; it needs to extend to all IT staff. The same is true for customer-centricity.
- Cloud. As cloud adoption continues, are we looking at cloud as a technical or business solution, or both? Do we know enough about the status quo to make informed decisions about moving IT services to the cloud? Probably not; yet for many, cloud is the answer. But I still can’t help think that we haven’t really taken the time to fully understand the question.
- Mobility. BYOD comes into play here again, but I think that a bigger issue is at hand — that we are still technology-centric. We all hear talk about MDM (mobile device management) as “THE big issue.” IMO, however, this is old-skool-IT with the device a red herring and of little interest to the customer (unless IT is providing outdated devices). Your customers want (or at least we hope that they continue to want) to access your services any which way they can and need to. Mobility is about people.
- Compliance. Whether it’s internal or external regulatory compliance (or governance), most of the above will potentially have a negative knock-on effect on compliance, whether it be SOX, software compliance, or meeting internal requirements for “transparency and robustness.” With everything going on elsewhere, it is easy for me to imagine a degradation in internal control, with a failure to react to new risks as a minimum.
Aug 1, 2013
Lean and Six Sigma – Are they related?
Jul 30, 2013
The waiting is over… Updates announced for ISO 9001 & ISO 14001
The last update to ISO 9001 was in 2008. ISO 9001:2008 basically restates ISO 9001:2000; the 2008 version only introduced clarifications to the existing requirements of ISO 9001:2000 and some changes intended to improve consistency with ISO 14001:2004.
ISO 14001 was last updated in 2004. The revision of the standard is in its early stages, and the earliest date expected for the final version is January 2015. In total, there are 25 recommendations under consideration for the new revision of ISO 14001.
Since the standard was first published in 1996, ISO 14001:2004, Environmental management systems – Requirements with guidance for use, has been adopted by over 250 000 certified users in 155 countries worldwide.
*Sources:
http://www.iso.org/iso/home/news_index/news_archive/news.htm?refid=Ref1547
http://www.isoqsltd.com/waiting-over-updates-announced-iso-9001-iso-14001/
Aug 17, 2010
General Comparison and Changes between ITIL V3 and ITIL V2
Most importantly, a detailed comparison between ITIL V3 and V2 reveals that all the main processes known from ITIL V2 are still there, with only a few substantial changes. In many instances, however, ITIL V3 offers revised and enhanced process descriptions.
New ITIL Structure: The ITIL V3 Service Lifecycle
The main difference between ITIL V3 and V2 is the new ITIL V3 Service Lifecycle structure: ITIL V3 is best understood as seeking to implement feedback-loops by arranging processes in a circular way.
This means the old structure of Service Support and Service Delivery was replaced by a new one consisting of the five ITIL V3 core disciplines:
- Service Strategy determines which types of services should be offered to which customers or markets
- Service Design identifies service requirements and devises new service offerings as well as changes and improvements to existing ones
- Service Transition builds and deploys new or modified services
- Service Operation carries out operational tasks
- Continual Service Improvement learns from past successes and failures and continually improves the effectiveness and efficiency of services and processes.
ITIL V3 complements the processes known from ITIL V2 with a number of new processes and puts more emphasis on producing value for the business.
Modifications to Process Interfaces
Due to the new Service Lifecycle structure, all interfaces between the ITIL processes were changed in order to reflect the new ITIL V3 process structure; so even if processes in ITIL V3 and V2 are broadly identical, their interfaces have changed. Example: The Incident Management process must now link to the Service Design processes, although a comparison between Incident Management in ITIL V2 and V3 reveals that the process itself did not change substantially.
Jul 26, 2010
ITIL - The CMDB - The central IT repository
The CMDB (configuration management database) is a conceptual IT model, which is indispensable for efficient IT service management. All IT components and inventories are managed in the CMDB.
Configuration management goes beyond asset management, which is often incorrectly used as a synonym: it does not only document assets from a financial point of view, but also captures information regarding the relationships between components, their specifications, and their locations.
Thus IT support can quickly access information on the interdependence of IT services and the IT components (= configuration items = CIs) necessary for them.
According to ITIL, a CMDB must feature the following functionalities:
• manual and, where applicable, automatic recording and modification of configuration items
• description of the relationship and/or interdependence between CIs
• change of CI attributes (e.g. serial numbers)
• location and user management for CIs
• integration via the ITIL processes represented in the system
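The functionalities above can be illustrated with a minimal data-structure sketch. This is not an actual CMDB schema; the class and field names (`ConfigurationItem`, `depends_on`, and so on) are hypothetical, chosen only to show how recording typed relationships between CIs lets support trace which services depend on a given component:

```python
# Minimal sketch of CIs with attributes AND relationships, which is what
# distinguishes a CMDB from a flat asset register. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str                                     # e.g. "server", "application", "service"
    attributes: dict = field(default_factory=dict)   # serial number, location, ...
    depends_on: list = field(default_factory=list)   # relationships to other CIs

def impacted_cis(ci, all_cis):
    """Walk relationships upward: which CIs depend, directly or
    indirectly, on the given CI? (Assumes the dependency graph is acyclic.)"""
    hit = []
    for other in all_cis:
        if ci in other.depends_on:
            hit.append(other)
            hit.extend(impacted_cis(other, all_cis))
    return hit

# Hypothetical CIs: an email service running on a mail application,
# which in turn runs on a server.
server = ConfigurationItem("srv-01", "server", {"serial": "XY123", "location": "DC-1"})
app = ConfigurationItem("mail-app", "application", depends_on=[server])
service = ConfigurationItem("email", "service", depends_on=[app])

cis = [server, app, service]
print([c.name for c in impacted_cis(server, cis)])  # ['mail-app', 'email']
```

A real CMDB additionally needs typed relationships (hosts, uses, connects-to), cycle handling, change tracking, and automated discovery; the sketch only shows the core idea of CIs plus their interdependencies.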
Dec 26, 2008
What has changed in ISO 9001:2008?
Here's a summary of the changes to ISO 9001 in 2008.
- There are no new requirements in this version. This is significant, because a 'requirement' is something you must do.
- All the changes are minor. They consist of changes of wording: clarifications and modifications to words or phrases and a few extra notes or examples (see below). Most are small additions or changes, with a few deletions.
The intent is to make the meaning of a requirement clearer, to improve compatibility with ISO 14001, and/or to assist translation into other languages.
- The new version was published in mid-November 2008.
- From that date, for a new certification you can choose certification to the 2000 or 2008 version.
- After one year (from Nov 2008) all recertifications/new certifications will be to the 2008 version. Two years later, certifications to the 2000 version will no longer be valid.
Some examples of the changes, with clause numbers are:
4.2.1: modified to clarify that a single document can cover requirements for one or more procedures (e.g., combine requirements for corrective action and preventive action into one procedure, or cover “nonconformity” within another procedure rather than having a separate one)
6.4: now clarifies what “work environment” includes, and gives examples such as noise, temperature, and humidity
8.2.1: note added with some ideas on how customer satisfaction can be measured
General advice: Get a copy of the new version, study the changes, and think about how (or if) the changes affect your system. Note, for example, the advice that you can address multiple requirements in a single procedure (we already have). Notice how the Standard has made it very clear that if you subcontract any part of your service or product out, this doesn't in any way remove or alter your responsibility to meet your customers' requirements.
ISO publishes new edition of ISO 9001 quality management system standard
ISO 9001:2008, Quality management system – Requirements, is the fourth edition of the standard first published in 1987, which has become the global benchmark for providing assurance about the ability to satisfy quality requirements and to enhance customer satisfaction in supplier-customer relationships.
ISO 9001:2008 contains no new requirements compared to the 2000 edition, which it replaces. It provides clarifications to the existing requirements of ISO 9001:2000 based on eight years’ experience of implementing the standard worldwide and introduces changes intended to improve consistency with the environmental management system standard, ISO 14001:2004.
All ISO standards – currently more than 17 400 – are periodically reviewed. Several factors combine to render a standard out of date, such as technological evolution, new methods and materials, new quality and safety requirements, or questions of interpretation and application. To take account of such factors and to ensure that ISO standards are maintained at the state of the art, ISO has a rule requiring them to be periodically reviewed and a decision taken to confirm, withdraw or revise the documents.
ISO/TC 176, which is responsible for the ISO 9000 family, unites expertise from 80 participating countries and 19 international or regional organizations, plus other technical committees. The review of ISO 9001 resulting in the 2008 edition was carried out by subcommittee SC 2 of ISO/TC 176.
This review has benefited from a number of inputs, including the following: a justification study against the criteria of ISO Guide 72:2001, Guidelines for the justification and development of management system standards; feedback from the ISO/TC 176 interpretations process; a two-year systematic review of ISO 9001:2000 within ISO/TC 176/SC2; a worldwide user survey carried out by ISO/TC 176/SC 2, and further data from national surveys.
ISO Secretary-General Alan Bryden commented: “The revised ISO 9001 results from a structured process giving weight to the needs of users and to the likely impacts and benefits of the revisions. ISO 9001:2008 is therefore the outcome of a rigorous examination confirming its fitness for use as the international benchmark for quality management.”
ISO/TC 176/SC 2 has also developed an introduction and support package of documents explaining what the differences are between ISO 9001:2008 and the year 2000 version, why and what they mean for users. These documents are available on the ISO Web site.
Although certification of conformity to ISO 9001 is not a requirement of the standard, it is frequently used in both public and private sectors to increase confidence in the products and services provided by certified organizations, between partners in business-to-business relations, in the selection of suppliers in supply chains and in the right to tender for procurement contracts. Up to the end of December 2007, at least 951 486 ISO 9001:2000 certificates had been issued in 175 countries and economies.
ISO (which does not itself carry out certification) and the International Accreditation Forum (IAF) have agreed on an implementation plan to ensure a smooth transition of accredited certification to ISO 9001:2008. The details of the plan are given in a joint communiqué by the two organizations which is available on the ISO Web site.
ISO 9001:2008, Quality management system – Requirements, costs 114 Swiss francs and is available from ISO national member institutes (see the complete list with contact details) and from ISO Central Secretariat through the ISO Store or by contacting the Marketing & Communication department.