Member Discussion

How To Make IP Cameras Work Better With VMS Software, From A Developer

I have written 15 or so VMS drivers for different camera manufacturers, so I see cameras from a different perspective to installers, manufacturers, managers etc. Some cameras work well with VMSes, most are less than perfect, but the Arecont was the worst...

One thing I notice is that the cameras I like and do not like are pretty much the same as the cameras the installers like and do not like. I really like Axis; I do not like Arecont. Therefore, I am thinking that if Arecont changes the things I don't like about their cameras, their problems might be solved. I am wondering how many problems with the cameras come down to them just not working well with third party VMSes?

So from a VMS developer's perspective, this is what I think Arecont should do:

Hire a good Windows C++ developer and write an application designed to test each camera model to the limits that I think it needs to handle in order to survive out in the field without doing annoying things that cause the NVRs to keep losing the connection etc.

The application should do the following:

1) Connect to the camera using open source RTSP clients available on the web, e.g. live555, and as many others as you can find, as these are generally the ones NVRs use, although not always. The application should be able to connect to the cameras using the following protocols:

  1.  RTSP over TCP (mandatory)
  2.  RTSP over UDP (optional)
  3.  RTSP tunneled through HTTP (mandatory)
  4.  MJPEG over HTTP (mandatory)

IMO, Arecont's proprietary protocol shouldn't be needed these days.
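As a sketch of where such a harness could start: the first message in the RTSP handshake is an OPTIONS request, and a camera that answers it cleanly over plain TCP has passed the most basic interoperability hurdle. The function names below are invented for illustration; a real harness would drive live555 or similar client libraries against each transport variant.

```python
# Minimal sketch of the first step of the RTSP-over-TCP handshake.
# build_options_request / parse_public_methods are hypothetical helper
# names; a real camera URL would replace the example one.

def build_options_request(url, cseq=1):
    """Build an RTSP OPTIONS request (per RFC 2326), usually the first message sent."""
    return (f"OPTIONS {url} RTSP/1.0\r\n"
            f"CSeq: {cseq}\r\n"
            f"User-Agent: camera-test-harness\r\n\r\n")

def parse_public_methods(response):
    """Extract the methods the camera advertises in its Public: header."""
    for line in response.split("\r\n"):
        if line.lower().startswith("public:"):
            return [m.strip() for m in line.split(":", 1)[1].split(",")]
    return []

# A reply shaped like a typical camera's answer:
reply = ("RTSP/1.0 200 OK\r\n"
         "CSeq: 1\r\n"
         "Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY\r\n\r\n")
print(parse_public_methods(reply))
# ['OPTIONS', 'DESCRIBE', 'SETUP', 'TEARDOWN', 'PLAY']
```

The harness would then walk through DESCRIBE, SETUP and PLAY for each transport, checking that the camera's replies match what the protocol document promises.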

2) For the H264 decoders use:

  1. Intel IPP H.264 decoders
  2. Intel Media SDK hardware accelerated decoders
  3. Optionally, NVIDIA's GPU-based decoders

In my experience the H.264 emitted from at least some Arecont cameras causes problems with all three. VMS companies generally do not write their own decoders; they are most likely to use the Intel decoders.

3) Test all decoders on the full range of Intel CPUs, from Celerons to Cannon Lake. The Intel decoders load different code depending on the CPU type. I have seen H.264 emitted from Arecont cameras crash the Intel decoders on Pentiums (really, really annoying). On generation 3 i7s it causes the decoders to throw memory exceptions that are at least caught. I do not think they crash in this case, but you still lose frames.

4) Test using different combinations of H.264 configuration parameters. E.g. the crash I mentioned in (3) was somewhat eliminated if I forced the cameras to use constant bitrate H.264 instead of variable when the VMS connects.

5) At an absolute minimum, simultaneously create a TCP connection for each imager and each stream. So if it is a 4x imager and supports dual streaming there will be 8 connections. Do this while having at least 2 web pages open, and make sure the cameras can handle it without dropping connections and crashing. This is the minimum. The Arecont 8185DN that I have can't handle this: it reboots, loses connections and crashes all over the place. It might be stable for 2 minutes, and then drops connections again. By comparison, some Axis cameras can handle 20 concurrent TCP connections.
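The shape of that minimum concurrency test can be sketched in a few lines. A loopback echo server stands in for the camera here (the real harness would target the device), and the pass condition is simply that every one of the 8 simultaneous connections still answers:

```python
# Sketch of the minimum concurrency test: hold one TCP connection per
# imager/stream open at once and verify none are dropped. The loopback
# echo server below is a stand-in for the camera.
import socket, socketserver, threading

class Echo(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request.recv(64)
        self.request.sendall(data)

class Srv(socketserver.ThreadingTCPServer):
    request_queue_size = 16   # enough listen backlog for the burst of connects

server = Srv(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

STREAMS = 8   # 4 imagers x dual streaming
conns = [socket.create_connection(server.server_address) for _ in range(STREAMS)]
alive = 0
for c in conns:            # every connection must still answer
    c.sendall(b"ping")
    if c.recv(64) == b"ping":
        alive += 1
    c.close()
server.shutdown()
print(alive)   # expect 8: no connection was dropped
```

Against a real camera you would add the extra browser sessions on top and leave the whole thing running, watching for the reboot/drop behavior described above.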

6) Test this on a network, or simulated network, that is noisy and has large packet loss etc., and at the same time send corrupt data to the cameras, drop connections unexpectedly, etc.

7) When the test application sends TCP packets to the camera, break them up with small delays. E.g. if the camera is expecting 100 bytes, send 10 and wait a bit, send 70 and wait a bit, then send the remaining 20. When receiving a reply from the camera, do the same: call the socket receive call and retrieve 10 bytes, wait a bit, then get the rest...
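This fragmented-I/O test catches firmware that wrongly assumes one `recv` returns one whole "message". A minimal sketch, again using a loopback echo server in place of the camera:

```python
# Sketch of the fragmented-I/O test: send a 100-byte request in delayed
# pieces (10, 70, 20 bytes) and drain the reply a few bytes at a time.
# The loopback echo server is a stand-in for the camera.
import socket, socketserver, threading, time

class Echo(socketserver.BaseRequestHandler):
    def handle(self):
        got = b""
        while len(got) < 100:          # the "camera" expects exactly 100 bytes
            chunk = self.request.recv(100 - len(got))
            if not chunk:
                return
            got += chunk
        self.request.sendall(got)

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

payload = bytes(range(100))
c = socket.create_connection(server.server_address)
for part in (payload[:10], payload[10:80], payload[80:]):
    c.sendall(part)                    # 10, then 70, then the remaining 20
    time.sleep(0.05)                   # small artificial delay between pieces

reply = b""
while len(reply) < len(payload):       # drain the reply 10 bytes at a time
    chunk = c.recv(10)
    if not chunk:
        break
    reply += chunk
    time.sleep(0.01)
c.close(); server.shutdown()
print(reply == payload)   # True if the peer reassembled the fragments
```

A camera that passes only when the bytes arrive in one piece will fail in the field the moment the network fragments traffic differently.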

8) Insert random timing delays between receiving frames, and parts of frames, from the camera. This puts memory pressure on the camera, and many don't handle it well. Make sure in such cases that the camera can throttle down, e.g. drop P-frames until the next I-frame, drop the quality, etc. I have seen even some Axis cameras slow down and freeze under this test.
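As a toy illustration of the throttling policy a camera should fall back to (drop P-frames until the next I-frame once the consumer falls behind, rather than stall or crash), here is a hedged sketch; the bounded queue and `enqueue` helper are invented for illustration, not taken from any real firmware:

```python
# Toy model of sender-side frame shedding under a slow receiver: once the
# bounded send queue is full, discard P-frames until the next I-frame,
# because P-frames are useless to the decoder without their reference.
from collections import deque

def enqueue(queue, frame, maxlen, state):
    """Admit a frame to a bounded send queue; state['dropping'] tracks
    whether we are discarding until the next I-frame."""
    if frame == "I":
        state["dropping"] = False      # an I-frame resets the decoder
    if state["dropping"]:
        return False                   # still skipping dependent P-frames
    if len(queue) >= maxlen:
        state["dropping"] = True       # receiver too slow: start shedding
        return False
    queue.append(frame)
    return True

q, st = deque(), {"dropping": False}
accepted = []
for i, f in enumerate(["I", "P", "P", "P", "P", "I", "P"]):
    if i % 3 == 0 and q:               # slow consumer: drains 1 frame per 3 ticks
        q.popleft()
    if enqueue(q, f, 2, st):
        accepted.append(f)
print(accepted)   # ['I', 'P', 'I', 'P'] -- stream thins out but stays decodable
```

The point is that the stream degrades gracefully: what gets through is still decodable, instead of the camera freezing or rebooting under memory pressure.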

9) Create and drop thousands of connections to the cameras over a period of time to test for memory leaks. There are undoubtedly cameras out there (not talking about Arecont in this case) that will slow down until they eventually freeze or reboot.
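The churn test itself is just a loop of short-lived connect/teardown cycles; the sketch below runs 200 against a loopback stand-in, where a real soak test would run thousands against the camera while watching for growing latency, slowdown, or reboots:

```python
# Sketch of the connection-churn test: open and close many short-lived
# connections and confirm every cycle completes. A loopback listener
# stands in for the camera.
import socket, socketserver, threading

class Ack(socketserver.BaseRequestHandler):
    def handle(self):
        self.request.sendall(b"ok")

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Ack)
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = 0
for _ in range(200):                   # scale to thousands for a real soak test
    with socket.create_connection(server.server_address) as c:
        data = b""
        while len(data) < 2:           # read robustly, even for 2 bytes
            chunk = c.recv(2 - len(data))
            if not chunk:
                break
            data += chunk
        if data == b"ok":
            ok += 1
server.shutdown()
print(ok)   # expect 200: every connect/teardown cycle succeeded
```

On a real device you would also log response times per cycle: a leak usually shows up as steadily growing latency long before the freeze or reboot.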

10) Run the application for 1 week and make sure there are no reboots at all, no memory leaks, and an acceptable number of unaccountable lost connections.

If the camera cannot handle these tests, keep improving the software, and if necessary the hardware (i.e. processing power and RAM), until it does.

NOTICE: This comment was moved from an existing discussion: Arecont Multi Imagers Dropping Communication To Servers


#1, thanks! I've made this its own discussion.

Question - do you think there is a relationship with system-on-a-chip choice? For example, do cameras that use Ambarella chips benefit from that vs Arecont's home-grown approach?

do you think there is a relationship with system on a chip choice?

There must surely be. The thing is, though, as a VMS developer I shouldn't have to be concerned with the internals of the camera; all I should have to be concerned with is the protocol that documents the interface to the camera. Separation of interface and implementation is an important concept in software engineering. The type of processor in the camera is an implementation detail that I should not have to worry about. If a camera uses a slow processor, that is OK, but the limitations should be designed into the interface. For example, if a camera can't handle X connections, it should return an error saying "too many connections" when I try to connect with X + 1, or throttle everything down, as opposed to, say, just crashing or dropping existing connections.

My original post is a bit one sided because it throws all the responsibility onto the camera side. Likewise I could write a post on things VMS developers can do to make it easier for the camera. The point is, when both sides reach out and do more than what they are arguably required to do, then you get a top-notch integration.

Taking some advice from the greatest collaboration in musical theater history, the entire post could be summarized in just three words :-)...

Alphonse and Gaston

In other words, the camera guy says to the VMS guy:

We'll do it your way...

The VMS guy says to the Camera guy:

No, we'll do it your way...

So you have:

1) the Camera

2) the VMS

3) the interaction between the Camera and the VMS.

Axis understands (3) well. If it is not well understood by either the camera or the VMS manufacturer (developers and managers), your product may appear to be crap to the installers and end users. For example, a camera may appear to repeatedly fail from the point of view of users and integrators, even though as far as the camera manufacturer is concerned it is working as designed when tested in isolation. The real problem might be that it just doesn't play well with the VMS, or vice versa. Now, one could take the attitude of blaming the VMS (or vice versa) but that is fatal, and I have been learning this the hard way myself, as I rework a lot of my VMS code to be more accommodating to the cameras.

Have you ever worked with Hikvision cameras and if so where do they rate in your opinion?

From an integration perspective, I rate modern Hikvision cameras quite highly, but not as highly as Axis. That being said, I have only been in the VMS industry for about 4 years, and I am not so sure about much older Hikvision cameras.

The problem I have with the Hikvision cameras is that retrieving motion and other events (e.g. LPR) is a bit more annoying. From memory, if connected via RTSP over TCP, you have to form an additional HTTP connection to get motion events, which makes it more brittle than it needs to be (Sony does that too). Or you have to go through their SDK (to get LPR events, for example). Not a major issue, but I find it a bit annoying. Axis, on the other hand, will send edge-based motion events via the SEI packets in the H.264 video/audio stream, so you only have a single connection to manage.

 

Thank you for your candid approach and what seems to be an honest opinion. Normally mentioning Hikvision instigates negative responses without technical merit, usually based on government ownership, which is a standard practice in China.

I see the cameras from a somewhat polarized perspective, different to that of an installer. If I am writing a driver, I am not concerned with the mechanical aspects of the camera or image quality, and the politics is certainly the least of my concerns. I may notice these things but they are not relevant to my job. What I want from the camera companies is:

1) An easy-to-understand protocol document that doesn't have lots of exceptions per camera model. That is, you implement the protocol for a PTZ, fisheye, multi-imager and fixed camera, and it basically works with all their models. Again, Axis and Hikvision are pretty good in this regard; Sony IMO is not so good. We're a small company; we can't be expected to buy one of every model of camera.

2) Cameras that are stable and work exactly as the protocol states.

3) Nothing that causes me or my customers stress or support calls...

Sometimes if there is a problem in the field, it can be very difficult to locate whether the problem is in the camera or the VMS, which is why I have been trying to emphasize in my previous posts that the interaction between the camera and the VMS is as much a real thing as the camera and the VMS by themselves.

Over the last 10 years of installing cameras (approximately 5,000 cameras, including Pelco, Axis, Arecont Vision, American Dynamics and Hikvision, with different VMSes including Genetec, Milestone, Exacq, VideoEdge, OnSSI and VideoXpert), I totally agree with what you are saying, and yes, we do have the exact problems you just mentioned.