Do you know how many tickets your helpdesk handles regarding application performance? You probably have some statistics about this (or your helpdesk team can gather them for you), but the more important question is how many of these cases are actually diagnosed and fixed.
This article discusses:
- The challenges related to end-user feedback regarding IT performance
- Best practices to resolve end-user performance complaints faster
- Tools to measure end-user performance
Challenges of handling complaints regarding IT performance
Administrators face several difficulties when trying to diagnose performance degradations reported by users to the helpdesk. The most common are:
What response time is considered fast/satisfactory or slow?
In most organizations, no acceptable (SLA) response time is defined, nor is there a historical baseline showing the normal response time of an application.
Few end users actually watch the clock to measure precisely how long they have waited for a screen to show up. If one end user is complaining, does it mean only this one user is having the problem?
What is the threshold of frustration or repetition before a user contacts the helpdesk? If they have given up on it, how will you know?
Lack of reliable facts
Not many users note the URL of a page they waited too long for, or take a screenshot of an error message. This lack of clear facts about performance degradations makes it almost impossible for administrators to start a diagnosis.
Performance degradations are very often intermittent. To troubleshoot one, you need to reproduce it and gather enough facts to identify the circumstances in which it happens and its possible causes.
Variety of user activities and transactions
An enterprise application can have a multitude of potential transactions, depending on the user's profile, function, and access rights.
In general, if no clear evidence about a performance degradation is collected, no proper diagnosis can be made.
Best practices to resolve end-user performance complaints faster
To handle network and application performance degradation reports from internal and external users efficiently, you need processes in place to:
- Measure end user performance 365 days a year
- Proactively monitor and keep a historical view of all user activities
- Define and publish SLA response times per application
- Create a detailed problem ticket template that users fill out with the information needed for a proper diagnosis
- Provide easy ways for the IT support to escalate tickets to the appropriate team (network, system, application, DBA)
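As a minimal illustration of the first three practices, measurements collected over time can be checked against a published SLA threshold per application. The sketch below assumes such thresholds exist; the application names and numbers are hypothetical:

```python
# Hypothetical published SLA thresholds, in seconds, per application.
SLA = {"crm": 2.0, "erp": 4.0}

def sla_compliance(app, samples, sla=SLA):
    """Return the fraction of measured response times within the app's SLA.

    `samples` is a list of response times (seconds) gathered by continuous
    monitoring; a drop in this ratio against the historical baseline is a
    signal to open an investigation before users start complaining.
    """
    limit = sla[app]
    within = sum(1 for t in samples if t <= limit)
    return within / len(samples)

# e.g. sla_compliance("crm", [1.2, 1.8, 2.5, 1.1]) -> 0.75
```

Keeping these ratios historically (per application, per day) is what gives the helpdesk an objective answer to "is it slow, or is it normal?".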
Tools to measure end-user performance
- Synthetic testing (robots)
Sometimes referred to as End User Experience monitoring, this approach uses robots to simulate user activity at a given frequency from a set of given locations. The simulation is based on preconfigured scenarios.
The outcome is a continuous measurement of the execution times for the different scenarios from a variety of locations.
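A minimal sketch of such a robot in Python follows. The intranet URL is a placeholder, and a real product would also schedule runs from multiple locations and store the results historically; the `fetch` parameter is an assumption added so the timing logic can run without a network:

```python
import time
from urllib.request import urlopen

# Hypothetical scenario: one (name, URL) step the robot replays on a schedule.
SCENARIO = [("login-page", "https://intranet.example.com/login")]

def probe(url, fetch=urlopen):
    """Fetch one URL and return the elapsed time in seconds."""
    start = time.perf_counter()
    resp = fetch(url)
    resp.read()  # include download time in the measurement
    return time.perf_counter() - start

def run_scenario(scenario=SCENARIO, fetch=urlopen):
    """Execute every step and return {step_name: elapsed_seconds}."""
    return {name: probe(url, fetch) for name, url in scenario}
```

Running `run_scenario()` on a timer from each branch office yields the continuous, location-by-location measurements described above.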
- RUM (Real User Monitoring)
Also referred to as Agentless Application Performance Management or Wire Data Performance Analytics, this approach captures all user traffic on the network, decodes it, and measures the response times.
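A RUM product does full packet capture and protocol decoding; as a rough sketch of only the response-time calculation it performs, assume we already have decoded `(transaction_id, event, timestamp)` records from the wire (this record format is invented for illustration):

```python
def response_times(events):
    """Pair each request with its response by transaction id.

    `events` is an iterable of (tx_id, kind, timestamp) tuples where kind is
    "request" or "response". Returns {tx_id: response_time_seconds}.
    """
    starts, times = {}, {}
    for tx_id, kind, ts in events:
        if kind == "request":
            starts[tx_id] = ts
        elif kind == "response" and tx_id in starts:
            times[tx_id] = ts - starts[tx_id]
    return times

# e.g. response_times([("t1", "request", 10.0), ("t1", "response", 10.5)])
#      -> {"t1": 0.5}
```

Because every real transaction is measured this way, no scenario scripting is needed; the trade-off, as the table below shows, is that nothing is detected until a real user performs the transaction.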
Advantages and Drawbacks
Both approaches have advantages and drawbacks:
| | Synthetic Data | Wire Data |
|---|---|---|
| Advantages | Proactive approach (no need for real user activity to detect a degradation) | Technology agnostic; exhaustive view of all user activities; non-intrusive (no additional load); low maintenance cost |
| Drawbacks | High maintenance cost; limited diagnostic capability (does not tell much about the root cause); does not reflect the variety of all user activities | Not proactive |