Subscriber Discussion

What Info Should a _Good_ Service Ticket Contain?

In the past two years there has been an increasing trend among my own clients for requiring that the appropriate Security and IT contacts receive a copy of all security system service tickets. For major deployments such as at airports, this has been a typical requirement for over a decade.

However, outside of critical infrastructure projects, I often see this kind of report:

Service Request:
Video recorder not working

Work Performed in Detail:
Reset video server, adjusted motion detection, confirmed recorded video plays back in client software.

There is no root cause analysis. The customer said "video recorder not working", but of course that's a guess as the actual symptom was, "Recorded video is not available in the client software."

How many problems were actually found? The video server was reset. Motion detection was adjusted. That sounds like two problems.

Why was the motion detection reset? Was it in the VMS or in one or more cameras?

When I called to suggest that this was not really a report "in Detail", the following expanded description was provided:

Re-set video servers, adjusted motion detection, and confirmed video was able to play back on work station up-stairs. Set camera 12 to cont. recording per customer request. Camera 2 was recording to C drive, switched camera to record to video storage drive I that all other cameras were recording too.

So now there are two video servers? Why did the customer want Camera 12 set to continuous recording? This could reflect a change to the risk picture. If it is just for temporary investigation purposes, it may need to be changed back so as not to reduce the retention period. It depends upon how assurance of video retention is currently being achieved (i.e. per-camera setting or just disk space allotment).

It is odd that Camera 2 was set to record to some special location. Service records don't show that change.

Why weren't the server logs and operating system logs checked before or after the reset? Does "reset the video server" mean a soft or a hard reboot? Was the server O/S running, or had the server machine completely crashed?

It turned out that there were multiple problems, and certainly more should be known about them.

Without a detailed and accurate service history, how can you distinguish between a series of unrelated problems, and a recurring problem?

The Electronic Security Association (ESA) has a Troubleshooting, Service and Maintenance online course whose first section is called "The Troubleshooting Mindset". When I first got involved in security technology nearly 30 years ago, such a mindset was common among service technicians. Now it seems to be a rarity.

So I'm looking for what might be accepted as "good practice" for information that should be included in a service ticket or service report that goes to the customer.

Hey Ray,

Good scenario. Very sloppy documentation. This statement says everything: "Without a detailed and accurate service history, how can you distinguish between a series of unrelated problems, and a recurring problem?".

But I see this more as a failure on the operations/service manager's part than on any individual tech's part. Techs are firemen, running around in vans between fires all day, and their bosses are always reminding them to try to battle more fires each day.

If there was even a rudimentary system in place that mandated what information was required to close a 'ticket', then techs would use it - and customers would remain informed.

The integrator has a support entity that tries to troubleshoot stuff over the phone first before they roll a truck, no? Why can't the field tech (who generally has a laptop) just have creds to log into the same tracking system the support person uses to enter details of the onsite trip? Set required fields (at least) and mandate detailed comments. Then, the integrator could answer your questions regarding this service call - and what the heck was going on with Cameras 2 and 12...! :)
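The idea of required fields that gate ticket closure can be sketched in a few lines. This is a minimal illustration only; the field names and the length threshold are my own assumptions, not any particular ticketing system's schema.

```python
# Hypothetical required fields a ticket must carry before it can be closed.
REQUIRED_FIELDS = [
    "customer_request",   # what the customer initially reported
    "symptom_observed",   # what the tech actually found on site
    "root_cause",         # the underlying problem identified
    "work_performed",     # repairs made, parts replaced, settings changed
]

MIN_DETAIL_LENGTH = 40  # arbitrary threshold to discourage one-line summaries


def missing_fields(ticket: dict) -> list:
    """Return the required fields that are absent, empty, or too brief."""
    problems = []
    for field in REQUIRED_FIELDS:
        value = (ticket.get(field) or "").strip()
        if not value:
            problems.append(field)
        elif field == "work_performed" and len(value) < MIN_DETAIL_LENGTH:
            problems.append(field)  # "reset server" alone won't pass


    return problems


def can_close(ticket: dict) -> bool:
    """A ticket can only be closed when every required field is filled in."""
    return not missing_fields(ticket)
```

The point is simply that "Video recorder not working / reset video server" would never make it past closure with even this crude check in place.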


1. The 'video server' most likely needed to be 'reset' because some idiot pointed Camera 2 to record to the same local disk (C:) that the OS runs on - which filled it up, causing the OS to lose the ability to do pretty much anything. (I'm fluent in Fieldtech Scribble)

2. Changing Camera 12 to FT Record will do exactly as you described. Needs documentation. Documentation avoids culpability arguments later.

3. 'Adjusting Motion Detection' makes no real sense.... unless the tech assumed (illogically) that cameras weren't recording because the motion detection sensitivity was set too low. (I'm fluent in dealing with illogical techs)

4. Giving the tech the benefit of the doubt, I'm gonna say 'servers' was a typo. :)

I coach our customer service people, who take inbound trouble ticket service requests and then process the tickets for billing, to follow these guidelines for populating the invoice description:

1. Describe what the customer initially requested

2. Describe what the tech found on arrival at the site (this is not usually the same as 1; what is presented as the problem is often a symptom of an underlying problem)

3. Describe the testing and troubleshooting done to find the problem, and what the actual problem was.

4. Describe the repairs performed, the parts replaced that fixed the problem, and the final resolution.
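The four-part description above maps naturally onto a structured record. Here is a rough sketch; the class and field names are my own, not an established schema.

```python
from dataclasses import dataclass


@dataclass
class ServiceReport:
    """One record per service call, mirroring the four guidelines above."""
    customer_request: str   # 1. what the customer initially requested
    found_on_site: str      # 2. what the tech found (symptom vs. problem)
    troubleshooting: str    # 3. testing done and the actual problem
    resolution: str         # 4. repairs, parts replaced, final fix

    def invoice_description(self) -> str:
        """Render the four sections as the invoice description text."""
        return "\n".join([
            f"Requested: {self.customer_request}",
            f"Found on site: {self.found_on_site}",
            f"Troubleshooting: {self.troubleshooting}",
            f"Resolution: {self.resolution}",
        ])
```

Keeping the four sections as separate fields, rather than one free-text blob, is what later makes the history searchable and comparable across calls.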

I have found that analyzing trouble tickets for underlying causes is not very fruitful; it does not yield any trends to act upon that we don't already know about (devices with a high failure rate). To train techs to develop troubleshooting skills, we get the techs to contribute to our troubleshooting guide, which is populated with a list of unique symptoms as they are encountered. The same symptom can have multiple causes. For each cause, once the problem was eventually fixed, we record the test or procedure that confirms or eliminates it as the cause, and, if it is the cause, what is needed for the repair.
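One way to structure such a guide is a mapping from each unique symptom to its possible causes, where every cause carries the test that confirms or eliminates it plus the repair. A minimal sketch follows; the entries are illustrative (loosely based on the scenario in this thread), not from an actual guide.

```python
# Symptom -> ordered list of candidate causes, each with its confirming
# test and repair. Entries here are invented for illustration.
troubleshooting_guide = {
    "recorded video not available in client software": [
        {
            "cause": "camera recording to the OS drive filled the disk",
            "confirm": "check free space on C:; review per-camera storage paths",
            "repair": "repoint the camera to the video storage drive, free disk space",
        },
        {
            "cause": "motion-detection sensitivity too low to trigger recording",
            "confirm": "walk-test the scene and watch for motion events in the VMS log",
            "repair": "raise sensitivity or switch the camera to continuous recording",
        },
    ],
}


def causes_for(symptom: str) -> list:
    """Return candidate causes for a symptom, in the order they should be tested."""
    return troubleshooting_guide.get(symptom.lower().strip(), [])
```

Because techs add entries as they encounter new symptoms, the guide grows from real field experience rather than from a manufacturer's generic FAQ.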

Hope that helps.

Robert, I was pleasantly surprised to read this: "we get the techs to contribute to our troubleshooting guide".

I have significant experience back in the '80s and early '90s helping manufacturers in the industry set up help systems that they would use to capture problem/solution data and quit reinventing the wheel. In one company I found that a problem with the same modem (one provided [resold] by the mfg) had been reported and "solved" independently 25 times in a single month. This was when they had no system in place at all.

Time-to-solution average for customer problems was reduced from 4 days to same day overall in just four weeks once the system was solidly in use. We actually had field techs recall recent support calls and enter them in for a couple of days straight. We normalized descriptions and so on, and the result was a night-and-day difference.

I have never worked with integrators on this kind of thing, but I know such a system (and free open source versions do exist) would be a boon to most service departments.

I second Robert's recommendation of reporting guidelines and troubleshooting guide / symptoms. I think this will help but also feel this is an uphill battle against two more severe problems, specifically:

  • A lot of junior people in technical roles are unfortunately very vague, especially in writing. They, for whatever reason, do not even realize that their 'analysis' of the problem is incredibly superficial and of very little help.
  • Likewise, many lack the underlying conceptual understanding to allow them to recognize things that are out of place. If you do not know how a device is designed to work, it's incredibly hard to figure out what is wrong (especially if it does not fall neatly into a list of common problems).

Seeing these types of reports infuriates me as that type of approach is incredibly inefficient and may never solve the problem (without getting someone else involved to redo it).

I suspect it's an educational system issue as most schools allow students to be sloppy with their word choices but in solving technical problems small details make huge differences (server vs servers, how did something crash, was a hard or soft reboot done, etc.).

So while I think some structure will help compensate (like training wheels), the underlying problems are far deeper.

I was at an ASIS Infrastructure group meeting a few years ago and the subject of change management came up. As an IT professional, I proposed that the Security industry should take a closer look at ITIL, which stands for Information Technology Infrastructure Library. It is basically a set of best practices for IT service management: a well-structured and mature methodology for performing services. There are many components that could be adopted by the Security industry to standardize service management.

Vasiles, I am a big fan of not reinventing the wheel. Security technology infrastructure as it exists today in large organizations was not a vision in the security industry 30 years ago. It could not have been, given that even in the IT world such a vision was not common, as the technology capabilities were nowhere near what they are today. Even ITIL was a different animal back in the 1980's (before it was ITIL) and in the 1990's.

I think this is an excellent suggestion and I'm going to look into that a little bit.

In what context were you making the suggestion at the ASIS Infrastructure group meeting?

Ray, there were questions coming in from integrators to the committee about as-is drawings and change management. I suggested that the ITIL Change Management and Asset Management domains would be a good start to helping people manage services. By using asset and change management correctly, you can realistically pre-plan a service call so that you can get in and out without having to tell the customer, "I was not aware of this; I will have to go back and get more parts." You can also envision a customer calling in and, with good records, being able to help the customer drill down to the real problem and provide the appropriate support.
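The pre-planning idea can be illustrated with a toy asset register: before rolling a truck, pull the asset records for the site so the tech knows what hardware is installed and which spares to bring. All the site and asset data below is invented for illustration.

```python
# Hypothetical per-site asset register (models and spares are made up).
site_assets = {
    "HQ lobby": [
        {"asset": "camera 2", "model": "ACME-D200",
         "spares": ["dome bubble", "PoE injector"]},
        {"asset": "video server", "model": "ACME-NVR8",
         "spares": ["2TB HDD"]},
    ],
}


def parts_to_bring(site: str) -> set:
    """Union of spare parts associated with every asset at the site."""
    parts = set()
    for asset in site_assets.get(site, []):
        parts.update(asset["spares"])
    return parts
```

Even this trivial lookup is enough to avoid the "I will have to go back and get more parts" conversation, provided the asset records are kept current through change management.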

I see from your LinkedIn profile that this is indeed one of your areas of expertise. Since the note I have next for you will be off-topic for this discussion, I'll contact you directly through the email address in your ASIS Member Directory profile.

I saw this all the time at the integrator, and IMO it's a failure of management. Usually service techs are overworked and their paperwork suffers for it. That, and a lack of training on how to do it right. Never assume someone automatically knows the best way something should be done; you have to provide the proper training.

At the integrator I came from, a substantial portion of work was performed by subcontracted installers and service techs. One of the conditions to pay an invoice was 'proper documentation' of the work performed.

The method that was used: the techs did not fill out their own summaries. Ever. Rather, a clerk would call them and play the game of 21 questions until 'good' information was fleshed out. No pencil-whipped job documentation, just the tech and a clerk (usually someone cross-trained as a troubleshooter) on the phone. The clerk entered the details into the job notes. We figured an extra 30 minutes of labor into a typical service call for this step, and it proved useful in improving job notes.

With a large accumulation of details in a database, the raw data could then be searched by keyword, platform type, location, or a range of dynamic search terms to uncover useful details in future work. Think 'micro-Google'.
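The 'micro-Google' idea amounts to keyword search across the accumulated job-note records. A real system would use a ticket database or a full-text index; this naive sketch, with invented example records, just shows the shape of the query.

```python
# Invented job-note records; real data would come from the ticket database.
job_notes = [
    {"site": "Airport T1", "platform": "ACME VMS",
     "notes": "camera 2 recording to C: drive, disk full, repointed to drive I"},
    {"site": "HQ", "platform": "ACME VMS",
     "notes": "motion detection sensitivity too low, raised and walk-tested"},
]


def search(records: list, *keywords: str) -> list:
    """Return records whose combined fields contain every keyword (case-insensitive)."""
    hits = []
    for rec in records:
        haystack = " ".join(str(v) for v in rec.values()).lower()
        if all(k.lower() in haystack for k in keywords):
            hits.append(rec)
    return hits
```

Searching by platform, location, or symptom keywords is what turns a pile of closed tickets into something a tech can consult before the next truck roll.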

"Rather, a clerk would call them an play the game of 21 questions until 'good' information was fleshed out."

That's a pretty clever workaround and might be the best practical option. On the other hand, if the tech cannot get good information on their own, it seems to indicate that they do not really understand the problem themselves.

Brian does describe a workaround that seems practical. I can see even an experienced technician preferring the call approach to having to write concise and accurate notes about the problem and the work done. The service call could be closed out immediately following the call (provided the issue was sufficiently addressed), and in the event the service tech had to get more information to close out the ticket, the tech would still be on site, which is much better than discovering it after a 1-hour drive back to the office.

It would seem to be more efficient to have an online system to log into, that by form or prompting would ask for the pertinent information. Lots of techs could report at once, whereas the phone call approach would be subject to peak load issues. A drawback might be that the QA on the report may follow immediately, which might still have the tech going off site before all needed information was captured.

The search aspect would be very valuable. This is an interesting approach, and it could be implemented quickly regardless of the current method of data capture (documents, a ticket system, or a database application).