Axis Laser Focus PTZ Tested (Q6155-E)
A few thoughts:
- For an industry that rightfully complains about the lack of innovation, this is a real innovation so it's disappointing to see meager overall interest in this so far.
- But I suspect it's related to Axis's marketing of this. The choice to emphasize the technical element of 'laser focus' rather than the operational outcome of 'super tracking', 'super details', or 'up to 100% greater clarity' hurt it. People find 'lasers' interesting, but it is not clear to most what using a laser to focus delivers. I think it would be better to flip it: lead with what it delivers, and then secondarily market it as 'laser focus' as a supporting element.
- Related, Axis's accompanying marketing video wastes time and does not show enough of the video quality comparison. The first minute is literally just preamble; most people are literally going to 'bounce'. Even if they don't, there are only a few brief examples in the rest of the video.
This is an interesting advancement, and I look forward to the day that all autofocus cameras use it.
My thoughts on this particular Axis... I think it's a little fugly, but some people must like the snow-globe form factor. My dog also shakes off when wet, and that isn't too effective either.
For an industry that rightfully complains about the lack of innovation, this is a real innovation so it's disappointing to see meager overall interest in this so far.
Though overall interest in PTZs has been falling for years, which might be a factor.
The choice to emphasize the technical element of 'laser focus' rather than the operational outcome of 'super tracking', 'super details', or 'up to 100% greater clarity' hurt it.
While I agree the marketing was not compelling, I have to say, for me personally, vague, unfalsifiable claims like 'super details' barely register. Whereas, if I read 'laser focus', there better at least be a physical laser beam involved somehow.
...most people are literally going to 'bounce'.
'literally' used in the figurative sense, of course.
for me personally, vague, unfalsifiable claims like 'super details' barely register.
You're not normally and I say that in the most complimentary way possible....
Laser focus appeals to those interested in technology but not as much to those who are interested in business or operational results.
You're not normally and I say that in the most complimentary way possible....
I'd like to think I'm relatively normally ;)
Little interest probably because PTZs are on the decline. I don't know if I would call vibrating the camera for 30 seconds very innovative. I would stick a Dyson Air Blade in the top of the dome, rotate the dome around and have it dry within 5 seconds.
Continuous focus is a great improvement however.
I don't know if I would call vibrating the camera for 30 seconds very innovative.
Straw man and you know it. What we are saying is innovative here is the laser focus. We actually criticized the wet dog / speed dry feature.
I do agree that PTZs are declining (our stats show that), but the people still using PTZs are mostly serious about having operators actually using them to track people and this feature (laser focus) makes that fundamental functionality much better.
How is the low light performance in comparison with the Q6114-E?
It is an interesting concept, delivered with Axis's usual quality of manufacture (and price tag!) to back it up. I would like to see this tech drift down to turret-type cameras (like a MIC-500), where the mechanical design is more suitable for this type of technology.
At the end of the day, it's just golf pin range finder innards hooked up to a look-up table!
Excellent product, it is an Axis after all. Now I am curious: do other PTZs with IR take 10 to 15 seconds to stabilize focus? Could it be that certain manufacturers use low-cost sensors? We tested two models that we distribute, both with IR. One of them takes 3 to 6 seconds to focus; the manufacturer warned us about this, which is why the product is cheaper. The cutting-edge product with the better sensor, a PTZ with IR, focuses very fast: it does not even take 2 seconds for the focus to be set.
We have had our first Q6155-E PTZ in the field for about 2 months now. It delivers, making a significant improvement in moderate to low light urban environments. It was a great success with a local PD in an urgent situation. They will continue to request this model as we build out the network.
I can tell you from experience this is probably the best PTZ I have seen considering no IR.
You forgot to mention the critical point in the reasoning behind the globe-style housing this uses: it allows the camera to see 20 degrees above its horizontal axis. While not always needed, think how many times PTZs are mounted on rooftops to look down. Facilities managers can now use this to look around their rooftop and see more than the deck, to check on antenna towers, air handlers, people, etc. The same goes for malls and airports, where cameras often mounted on first floors can look up more than others to see second-floor activity.
Another feature is that you can adapt a 360 quad cam AXIS Q6000-E Mk II to it for complete 360 immersive viewing right onto the housing.
I have sold a considerable amount of these and so far the customer has been very happy. Plus it offers 60fps.
I'm sorry, you cannot use a Q6000-E with the Q61 series cameras.
The mounting assembly fastening the camera to the bracket is not the same.
Q6000-E implies it is meant for Q60 series cameras, not Q61 series.
Another feature is that you can adapt a 360 quad cam AXIS Q6000-E Mk II to it...
Sorry, I read attach, not adapt.
Still, how is it a "feature" if it requires adaptation with third party equipment?
Not 3rd party, it's an add-on to the Q series from Axis directly: https://www.axis.com/global/en/products/axis-q6000-e/
Used with the adapter kit: https://www.axis.com/global/en/products/axis-t94a01c-attachment-kit
No, I don't work for Axis, but this is a bad a** camera:
watch the video: https://youtu.be/txAJjSm7jOU
Thanks for pointing it out, I stand corrected.
I've never heard about this attachment, and when I looked for it on Axis's website I didn't see it as it was listed all the way down under Miscellaneous. I expected to see it with the other brackets.
It's pretty new - they demoed it at ISC West a few months back, and I believe it just became available for ordering maybe a month or two ago. I agree that the Q6000 setup is very slick, though.
We shot a quick demo video of the uptilt:
Other cameras do 10-15° uptilt, as well. I recall being fascinated by it when Sony released their HD PTZs in 2007-2008ish. The difference in the Axis dome (according to Axis) is that there is no seam between the flat and curved part of the dome, since the whole dome mechanism is spherical, so there's no loss in quality where those two parts are welded together or stretched from molding in other cameras. Anecdotally, it does look better than what I recall some others looking like, but we haven't explicitly tested it here. We may revisit it in the future.
Also one side note about 60 FPS capability: the Q6155 will do 60FPS, yes, but only at 720p. Max framerate on 1080p streams is 30.
I'd appreciate your thoughts on this - How acute is the problem of rain drops in surveillance applications? Axis must have done a marketing analysis that ended up justifying the R&D NRE to take this solution to market. Are you guys hearing from your customers that they will pay extra for a solution that mitigates video degradation due to rain drops?
Axis must have done a marketing analysis that ended up justifying the R&D NRE to take this solution to market.
Maybe, but I do not think this is a given. I have seen enough instances of engineers doing things for fun/experimentation that are later turned into specific 'features' by marketing. This does not mean it is a bad thing.
For the shake-dry feature, it is mostly just taking existing capabilities (the ability to move the camera) and combining them into a 'feature'. I doubt that this took a whole lot of R&D NRE, and in fact it could have been discovered by accident (auto-tracking code that moved the assembly too quickly trying to acquire a target, and someone saying "look how it shakes the camera, that would help shed water droplets, let's call it a feature!").
The laser focus was far less likely to be a happy accident, and that probably involved some meetings and research before deciding to build it, but shake-dry has always impressed me as a side-effect kind of feature addition.
I too, suspect Brian is correct. But that is for sure, exactly how a lot of really cool (or important) things have come to pass in the last couple hundred years, eh?
As someone who has a substantial amount of PTZ experience over the years, I find the value of the 20-degree uptilt to be the better feature, and one whose value is underestimated. I also find the current general inclination of the industry to de-emphasize or write off PTZs to be, at best, premature, and at worst, short-sighted altogether. Pixel density, pixel management, storage compression, analytics, and target tracking (to mention a few) on the MP side are all gonna have to get a lot better before some of my clients feel comfortable dropping PTZs as a solution and replacing them with wide-angle MP FOVs. Seems like a lot of mfrs are pushing too hard to count them out....
Also, yes the AXIS PTZ is fugly but, I practice pragmatism as a second religion, so I'll take function over form any day.
I doubt that this took a whole lot of R&D NRE, and in fact it could have been discovered by accident (auto-tracking code that moved the assembly too quickly trying to acquire a target, and someone saying "look how it shakes the camera, that would help shed water droplets, let's call it a feature!").
Are you kidding me? You mean the thing doesn't have a dedicated auto-kinetical hydro-dispersal module? ;)
With traffic control, where most problems happen in poor weather, they try hard to keep their cameras running clear.
How acute is the problem of rain drops in surveillance applications?
Skip, I can only speak anecdotally but it comes up every so often from our discussions, enough so that we did a Rain Camera Shootout a few years ago to analyze this.
Thanks for that reference. Some good insights are provided. One of which is that dome type housings (similar to this Axis spherical one) are most susceptible to rain drops. I agree with Brian's suspicions the shake-dry feature could have been backed into, but there is also the fact that this product is active. And if rain gets on the laser faceplate that will probably shut down the auto focus feature. This may also have been a driver for a means to get the droplets off the housing.
And if rain gets on the laser faceplate that will probably shut down the auto focus feature. This may also have been a driver for a means to get the droplets off the housing.
Remember, though, that this feature was first released on their 4K PTZ over 18 months ago, as shown here in the infamous, voyeuristic "Miracle on Ice" video:
I believe the "speed dry" feature has been standard on all their Q61-series PTZ cameras (all the ones with that design - not specific to this particular one, or the 4K one).
That's correct. It was announced in the Q61 press release. We covered it in our 2015 ISC West show directory, too:
Whatever happened to the rain-wash coating that Panasonic showed off a few years ago or CleanView from Digital Watchdog? I know that they would wear off if touched (I'm sure no technician would ever touch the acrylic during installation...) and even without that they were supposedly only good for 7 years, but it seems having a water repellent on the bubble made sense and that by now someone would have come up with a better equivalent. Just too impractical?
Acrylic and polycarbonate bubbles are notoriously difficult to keep clean and free from rain drops. The build-up of static doesn't help either. In over twenty years in the security biz, the most effective / cheapest deterrent I've found so far is Mr Sheen with beeswax, applied with a soft cloth every 6 months.
The trick of it is to break the surface tension of the droplets to allow them to run off. The beeswax component works in the same way Rain-X does on car windshields.
Mechanically shaking a dome head the way Axis does cannot be very good for their MTBF figures!
Interesting article! I'm surprised that regular auto-focus doesn't perform better. I'm not an expert in low-light focus but to me, the take-away is that within 18 months, competitors should emerge that simply apply Deep Learning to improve the auto-focus, without any laser.
competitors should emerge that simply apply Deep Learning to improve the auto-focus
What type of latency would such a real-time deep learning auto-focus adjustment process entail? My concern would be that if a camera is tracking a moving person, it would need to analyze extremely quickly to keep up. A laser is presumably traveling at the speed of light or somewhat close to it which means latency is extremely low. Thoughts on latency for deep learning in such an application?
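On the latency point, a back-of-envelope comparison is easy to run. The numbers here are illustrative assumptions, not vendor specs: the laser pulse's flight time is set only by the speed of light, while the 50 ms figure is just a guessed per-frame budget for an embedded neural network.

```python
# Back-of-envelope comparison: laser pulse flight time vs. an assumed
# per-frame deep-learning inference budget. Numbers are illustrative.

C = 3.0e8  # speed of light in m/s

def laser_round_trip_s(distance_m: float) -> float:
    """Time for a laser pulse to reach the target and bounce back."""
    return 2 * distance_m / C

flight = laser_round_trip_s(100)    # target 100 m away
dl_inference = 0.05                 # assumed 50 ms per frame for embedded DL

print(f"laser round trip: {flight * 1e9:.0f} ns")
print(f"assumed DL inference: {dl_inference * 1e3:.0f} ms")
print(f"DL is ~{dl_inference / flight:,.0f}x slower than the pulse itself")
```

In practice the rangefinder's latency is dominated by its readout electronics rather than the pulse flight itself, but the gap versus per-frame CNN inference on cheap embedded silicon would still be large.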
Deep learning on a camera is still a bit down the road due to the processing requirements, at least as long as cameras have to stay cheap.
What exactly do we think the laser is doing? My guess is that the video triggers on motion, and the laser spot subtends essentially the same field of view as the visible channel (since the laser doesn't move within the housing). It would seem the laser is just a direct detection range sensor, and then the range is fed to the camera's servo auto focus via a control loop of some sort.
If that's the case, then perhaps some type of video analytic could eventually serve up the range instead, but it would never be as fast, accurate, or probably as cheap as a dedicated commodity laser range sensor.
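If that guess is right, the "range finder innards hooked to a look-up table" idea is simple to sketch. A minimal version, assuming a hypothetical factory calibration table mapping measured range to a focus motor position (all numbers invented for illustration):

```python
import bisect

# Hypothetical calibration table: measured range (m) -> focus motor position.
# In a real camera this would come from factory calibration, per zoom step.
RANGE_M   = [1,   2,   5,   10,  20,  50,  100]
FOCUS_POS = [900, 760, 540, 400, 300, 210, 150]

def focus_position(range_m: float) -> float:
    """Linearly interpolate the focus motor position for a measured range."""
    if range_m <= RANGE_M[0]:
        return FOCUS_POS[0]
    if range_m >= RANGE_M[-1]:
        return FOCUS_POS[-1]
    i = bisect.bisect_right(RANGE_M, range_m)   # first table entry > range_m
    x0, x1 = RANGE_M[i - 1], RANGE_M[i]
    y0, y1 = FOCUS_POS[i - 1], FOCUS_POS[i]
    t = (range_m - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)
```

A real camera would need one such table per zoom step, since the focus position for a given distance shifts with focal length, but the lookup itself stays this cheap.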
I agree with you except for this:
"If that's the case, then perhaps some type of video analytic could eventually serve up the range instead, but it would never be as fast, accurate, or probably as cheap as a dedicated commodity laser range sensor."
Deep Learning chipsets will likely get commoditized really fast. Within 18 months there will be new cameras coming to market with them. The typical use case for this feature is a person roaming around a parking lot; the rate of change of focus will be minimal. Running a DL-enabled auto-focus algorithm once or twice a second will be sufficient to achieve most of the gain demonstrated by this laser range finder. Silicon will always be cheaper in the long run than dedicated hardware.
BTW the hard part with Deep Learning usually is getting good training and validation data. With this Auto-focus problem, this would be trivial, with any off-the-shelf PTZ.
Running a DL-enabled auto-focus algorithm once or twice a second will be sufficient to achieve most of the gain demonstrated by this laser range finder.
Can you elaborate on this? Take the scenario of a person walking and the PTZ being controlled, which is a common PTZ scenario. Will once or at most twice a second be enough? What is the risk that by the time the DL has processed its image, the person and/or PTZ will have moved?
John, you have a good point which I didn't initially think about. But during very rapid Pan-Tilt movement the operator probably has no expectation that the focus will be better than the current auto-focus algorithm, and once mostly fixed on a slow-moving object, I think that running the improved P-T algorithm at a low frame-rate would be sufficient.
Also, chances are, filters used in current auto-focus algorithms could probably be improved by Deep Learning, at the same cost in processing used today.
I don't mean very rapid, I just mean typical small steps.
How does this deep learning algorithm work for focus? Walk me through it. It gets an image and the DL says what? Judges its focus / quality? Then it gets another image? Judges that focus / quality? Picks which one is better? I am trying to understand how many images the DL needs to go through to figure out which one is the highest quality. And related to that, how long does that take? And does it keep repeating that again and again, or?
> How does this deep learning algorithm work for focus? Walk me through it. It gets an image and the DL says what? Judge its focus / quality?
OK, I'll try to do it briefly, off the top of my head.
During processing of live images, the DL autofocus algorithm would take the same input (image, current focal length) and generate the same output (absolute or relative focal increment/decrement, e.g. zero if currently best focus) as an old-school autofocus algorithm.
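That interface amounts to a small control loop. Everything below is a placeholder sketch: `model.predict` stands in for whatever trained regressor is used, and `camera` for whatever lens-control API exists; only the shape of the loop is the point.

```python
# Sketch of the autofocus control loop described above: same interface as a
# classic autofocus, but the step is predicted by a learned model.
# `model` and `camera` are assumed stand-in objects, not a real API.

def autofocus_step(model, frame, current_focus: float) -> float:
    """Return the predicted signed focus correction (0 when in focus)."""
    return model.predict(frame, current_focus)

def run_autofocus(model, camera, tolerance: float = 0.5, max_iters: int = 20):
    focus = camera.get_focus()
    for _ in range(max_iters):
        frame = camera.grab_frame()
        delta = autofocus_step(model, frame, focus)
        if abs(delta) < tolerance:      # model says we're already in focus
            break
        focus += delta
        camera.set_focus(focus)
    return focus
```

An old-school algorithm would iterate many times, hunting for a contrast peak; a good learned model would ideally converge in one or two steps, which is what makes the once-or-twice-a-second rate plausible.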
The difference is that during the training phase, the DL autofocus would automatically learn optimal filters (potentially much more complex) that will outperform in less-ideal-lighting-conditions the simple image filters and custom algorithms used typically.
To collect the training data for the DL autofocus with minimal effort and minimal human input, one could develop a relatively simple script to generate it automatically:
- Place a regular IP PTZ in an environment that has moderate movement and bright lighting, e.g. a shopping mall.
- Let the regular autofocus pick the best focal distance. Record the image and the focal distance.
- Change the focal distance forward/backward at different increments (deltas). Record the resulting images and the deltas.
- Repeat with much lower exposures, e.g. 1/8000s, to simulate night time.
- Repeat for different Pan-Tilt-Zoom values, randomly or by sampling.
- Physically move the PTZ to a different environment, or repeat across lots of PTZs in different environments.
- To control for movement in the scene, compare before/after pictures and throw away the samples that had motion.

This shouldn't take more than a couple of days to collect enough data to test whether this works, and let's say a few weeks to collect enough different environments for a robust final result (e.g. outside, different weather patterns, etc.).
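A rough sketch of that collection script, where the `cam` object and all of its methods are assumptions standing in for whatever PTZ API is available (e.g. a vendor SDK or ONVIF wrapper); only the sampling structure matters here:

```python
import random

# Sketch of the training-data collection loop. `cam` and its methods are
# hypothetical stand-ins for a real PTZ API.

DELTAS = [-200, -100, -50, -20, 20, 50, 100, 200]   # focus perturbations
EXPOSURES = [1/60, 1/500, 1/8000]                   # bright through simulated night

def collect_samples(cam, n_ptz_positions=100):
    samples = []
    for _ in range(n_ptz_positions):
        cam.move_to(pan=random.uniform(0, 360),
                    tilt=random.uniform(-90, 20),
                    zoom=random.uniform(1, 30))
        cam.run_builtin_autofocus()          # regular autofocus gives ground truth
        best_focus = cam.get_focus()
        before = cam.grab_frame()
        for exposure in EXPOSURES:
            cam.set_exposure(exposure)
            for delta in DELTAS:
                cam.set_focus(best_focus + delta)
                frame = cam.grab_frame()
                # label = signed correction needed to get back in focus
                samples.append((frame, best_focus + delta, -delta))
        cam.set_focus(best_focus)
        after = cam.grab_frame()
        if cam.scene_changed(before, after): # motion during capture -> discard
            samples = samples[:-len(DELTAS) * len(EXPOSURES)]
    return samples
```

The label stored with each frame is the signed correction (`-delta`) that would bring the lens back to the ground-truth focus, which is exactly what the network is later trained to predict.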
Once you have all that data, you simply need to run a typical DL training process, e.g. using TensorFlow. A simple CNN with 3-4 layers should outperform the old-school autofocus algorithm and should learn how to compensate for the low light. (Any architecture designed for ImageNet would probably be a good starting point, e.g. VGG.) This is the art part of DNN design that requires practice but isn't really that hard.
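Under those assumptions, the model itself is only a few lines of Keras. The layer sizes below are placeholders rather than a tuned architecture, and the single linear output is the signed focus correction to apply:

```python
import tensorflow as tf

# Minimal Keras sketch of the regressor described above: grayscale image in,
# signed focus correction out. Layer sizes are illustrative placeholders.

def build_autofocus_net(input_shape=(128, 128, 1)):
    image = tf.keras.Input(shape=input_shape)
    x = image
    for filters in (16, 32, 64):                # three small conv blocks
        x = tf.keras.layers.Conv2D(filters, 3, activation="relu",
                                   padding="same")(x)
        x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    delta = tf.keras.layers.Dense(1)(x)         # signed focus correction
    model = tf.keras.Model(image, delta)
    model.compile(optimizer="adam", loss="mse") # regress on recorded deltas
    return model
```

Training is then just `model.fit(images, deltas)` on the data collected above; the art is mostly in the data variety, as noted.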
This basic process can be improved many ways, e.g. by running a better focal quality algorithm based on frequency distribution, by using a laser to find the ground truth (only during data collection stage), etc.
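For reference, one classic "focal quality" measure of that frequency-based kind is the variance of a Laplacian-filtered image: sharp images have more high-frequency energy, so a higher score means better focus. This is a generic textbook technique, not anything Axis has described:

```python
import numpy as np

# Variance-of-Laplacian focus score: a standard contrast-based sharpness
# measure. Higher score = more high-frequency detail = better focus.

def laplacian_variance(gray: np.ndarray) -> float:
    """Focus score: variance of the discrete Laplacian of a grayscale image."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]    # vertical neighbors
           + g[1:-1, :-2] + g[1:-1, 2:])   # horizontal neighbors
    return float(lap.var())
```

An old-school contrast autofocus sweeps the lens and keeps the position that maximizes a score like this; the DL approach discussed above instead tries to predict the right correction from a single frame.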
Considering how straightforward this is, I wouldn't be surprised if there are already companies doing it, e.g. in mobile phones, where the market is much bigger.
And the copycats have just released theirs.