Not really a big fan but Oilers so far
IPVMU Certified | 11/18/13 10:32pm
Hi Alex, Canadian, eh? What part (i.e., which team?!) I am a Devils fan.
That's ok, Steve
I was with Sanyo Canada before
I understand challenges :)
IPVMU Certified | 11/18/13 09:43pm
Hi Alex, Very good! I think it might be tough to do thoroughly with an "optics lab"... I like your ambition though! Enjoy the tests!
That's what I was doing at home
using a scope, color chart, and NTSC generator
I actually try to use simple techniques, such as measuring the white screen level (output from the camera) on my monitor while lowering the light level in the room
It's kinda fun to play :)
IPVMU Certified | 11/18/13 09:09pm
Hi Alex, Yes! Unfortunately it is a very difficult topic to understand - and even harder to explain. :) Sometimes pictures can help...
IPVMU Certified | 11/18/13 08:49pm
The best process for choosing among several camera systems is to perform direct comparisons - using the same conditions and criteria for each system. This is where IPVM and its shootouts become so valuable for the industry.
IPVMU Certified | 11/18/13 08:47pm
Hi Alex, our factory is (painfully!) conservative. As you may know, and as IPVM has been stating, there are no industry standards for stating the various video camera "specifications", such as Minimum Sensitivity. Our factory specifies Minimum Sensitivity at full frame rate (no DSS), full iris opening, and default AGC gain (which is not the maximum available AGC gain), at a video output level of 50-IRE (or IRE equivalent). This 50% video output level is a bit conservative; I see some "specifications" listed at 30-IRE (or equivalent). Rating a camera block at 30-IRE (or even lower!) provides a better (about 40% lower) minimum sensitivity value than rating at 50-IRE.

Another issue, again brought up by IPVM, is the WDR "specification". This WDR "spec" is even worse... I hear people state a WDR "spec" of 100-dB (or even higher). I just smile and repeat to myself - OK, that would be the surface of the Sun to the Dark Side of the Moon (sorry, I love Pink Floyd!). Anyhow, I then ask the person, can you please tell me what you are basing the comparison on?... God I hate "specsmanship"!!! Oh well, it is just part of the industry.

By the way, since there is no industry standard for measuring and quantifying WDR performance, our factory does not provide a WDR "spec", even though our OEMs are crying for one. I tell them, 'make one up', just be sure to describe the conditions and criteria you are using for your "spec".
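To illustrate the 30-IRE vs. 50-IRE arithmetic, here is a sketch assuming the video output level scales linearly with scene illumination - an idealization for illustration, not any factory's actual measurement procedure:

```python
# Sketch: why rating at 30 IRE yields an ~40% lower minimum sensitivity
# number than rating at 50 IRE, assuming output level scales linearly
# with scene illumination (an idealization).

def min_sensitivity_at(ire_criterion, lux_at_50_ire):
    """Lux needed to reach a given IRE output, scaled from the 50-IRE rating."""
    return lux_at_50_ire * (ire_criterion / 50.0)

lux_50 = 0.5                             # hypothetical camera: 0.5 lux @ 50 IRE
lux_30 = min_sensitivity_at(30, lux_50)  # same camera "rated" at 30 IRE
print(lux_30)                            # 0.3 lux on the spec sheet
print(1 - lux_30 / lux_50)               # 0.4, i.e. a 40% "better" number
```

Same camera, same optics - only the rating criterion changed, which is exactly the "specsmanship" problem.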
That's why I said "in a simple way" :)
Looking forward to your answer about low lux
IPVMU Certified | 11/18/13 08:11pm
Hi Alex, Sorry for the delay... Busy Monday morning with my "real" job! First, being a bit nit-picky here, but "Even and Odd Frames" should be "Even and Odd Fields"... It gets very technical - and confusing - when you discuss "Fields" and progressive scan image sensors. The terminology is a holdover from interlaced image sensors and video monitors. Also, most of the SD NTSC and PAL camera systems are used in analog systems (i.e. Composite Video, Coax, etc.) and display on traditional video monitors, which are interlaced devices. The benefits of using progressive scan image sensors and processing are (a) better vertical resolution, and (b) the ability to offer a perfectly still freeze frame. (Unfortunately) There are still a lot of analog security systems in existence, although most systems (hopefully) will evolve into IP systems within maybe 5-10 years... Thanks for the questions!
Steve, have a ? for you
How does Hitachi measure lux levels on IP cameras?
Please be specific
So basically, if I can put it in a simple way
you take a nice progressive frame
split it into even and odd frames, then combine them to get a standard NTSC interlaced frame
IPVMU Certified | 11/16/13 11:16pm
Hi Alex, For our SD NTSC and PAL camera blocks that use full frame rate progressive scan image sensors and full frame rate progressive processing, the interlaced video output fields are derived from the image sensor frames as follows: For the first image sensor frame, the output field (Field A) is taken from the odd-numbered scan lines of the sensor frame. For the next image sensor frame, the output field (Field B) is taken from the even-numbered scan lines of the sensor frame. So basically the unused lines are thrown away.

There are also special interlaced output modes, called PsF, both for NTSC and PAL, where both interlaced Field A and Field B are taken from the same full frame progressive output from the image sensor. With these modes, there is no time or motion displacement between the two fields, so the images are clear for frame-by-frame viewing and/or capturing/recording using "Frame Capture" devices and/or DVRs.

This concept is especially hard to understand (and explain!) - even my Sales Managers have a hard time grasping it. Terms such as "Frame" and "Field" take on different meanings between interlaced image sensors and processing versus progressive image sensors and processing. HD and Full HD, by using progressive sensors, processing, and digital output, have done away with this confusion... I hope this helps you. Let me know if you have any follow-up questions.

One important item I should add is that the output of the camera blocks is NOT necessarily the final output of the IP camera system that uses the blocks. An OEM's driver can use or discard any or all of the frames from the camera block to suit their desired application.
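The field derivation Steve describes can be sketched as follows (the frame data and the `field_from_frame` helper are illustrative stand-ins, not Hitachi's code; scan lines are numbered from 1, so "odd" lines sit at list indices 0, 2, 4, ...):

```python
# Sketch of deriving interlaced fields from progressive sensor frames.

def field_from_frame(frame, parity):
    """Take every other scan line from a progressive frame.
    parity='odd' -> lines 1, 3, 5, ...; parity='even' -> lines 2, 4, 6, ..."""
    start = 0 if parity == "odd" else 1
    return frame[start::2]

frame1 = [f"frame1-line{n}" for n in range(1, 9)]  # first sensor frame
frame2 = [f"frame2-line{n}" for n in range(1, 9)]  # next sensor frame

# Normal interlaced output: Field A from frame 1, Field B from frame 2,
# so the two fields are displaced in time (motion blur between fields).
field_a = field_from_frame(frame1, "odd")
field_b = field_from_frame(frame2, "even")

# PsF mode: BOTH fields come from the same progressive frame, so there is
# no time/motion displacement - clean for freeze-frame viewing.
psf_a = field_from_frame(frame1, "odd")
psf_b = field_from_frame(frame1, "even")
```

The unused half of each sensor frame is simply discarded in the normal mode, which matches the "thrown away" wording above.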
I was kinda thinking along the same lines :)
How do you convert progressive to interlaced?
IPVMU Certified | 11/16/13 04:55am
Hi Alex, keep in mind we have manufactured NTSC and PAL camera blocks for over 15 years, and these camera blocks were required to output full frame video (actually interlaced fields) to be viewable on analog security video monitors. When WDR was developed about 10 years ago, the CCTV security world was still very much analog and required the same 60-fps (or 50-fps). To maintain the full 60-fps/50-fps output with WDR, the combined frames (fields) were repeated twice.

Even after the evolution to HD and Full HD (digital output instead of analog output), end users were still "stuck" in the analog NTSC/PAL world of 60-fps and 50-fps. This seems to still be true these days, since our European OEMs run our HD/Full HD camera blocks in 50-fps/25-fps modes... Anyhow, because of this continuing trend, our WDR implementation with our HD/Full HD camera blocks uses the same dual shutter speed methodology with frame doubling output. These HD/Full HD camera blocks also use frame memory to output repeated frames in DSS mode so that the "full" 60-fps/50-fps frame rate is maintained.

By the way, our SD NTSC and PAL camera blocks output interlaced video even though the image sensors output full progressive scan frames, which are processed in full progressive frame mode. Of course, the HD/Full HD camera blocks are fully digital and output full progressive scan video frames. Regards.
Steve, thanks for the good explanation
have a few ?s for you
"This combined image is both output from the camera and also written to internal frame memory. For the next frame, the stored combined image will be output from the camera while the next image sensor "low shutter speed"..."
Why would you output the same image twice
unless you're making two fields to create a full frame?
IPVMU Certified | 11/15/13 10:06pm
Hi John, while I do not know how Axis implements WDR using their DSP, this is how our implementation works. First, WDR is only possible with progressive scan image sensors. The camera DSP processes each complete (progressive) frame from the image sensor. The "first" frame uses the nominal (non-DSS) shutter speed. This would be 1/60-sec for our NTSC camera blocks, 1/50-sec for our PAL camera blocks, and either 1/60-sec or 1/30-sec for our HD/Full HD camera blocks. This "first" frame is written to internal frame memory within the camera block. The "second" frame will use a higher shutter speed (up to approximately 1/4000-sec). The exact shutter speed is chosen to be able to "capture" the bright areas of the video scene. Next, the DSP will read back the "first" frame and mix this "slow shutter speed" video data with the second "high shutter speed" video data to form the final image. This combined image is both output from the camera and also written to internal frame memory. For the next frame, the stored combined image will be output from the camera while the next image sensor "low shutter speed" frame is written to internal memory as the process repeats itself. The camera will adjust its exposure (iris level and AGC gain) during the "low shutter speed" frame to be able to capture the dark areas of the video scene.

This implementation of WDR was developed many years ago with our SD NTSC and PAL camera blocks. The "low shutter speed" was never below the nominal shutter speed because the camera had to maintain full output frame rate for the video monitors. This same implementation was maintained when we progressed to HD and Full HD camera blocks. We offer both automatic WDR and manual control over both the low shutter speed and high shutter speed WDR parameters. However, most (if not all) of our OEMs use automatic WDR unless the video scene is completely static, which is really never the case for normal CCTV surveillance applications.

I hope this explanation is understandable - I know it is a bit complicated and technical. Remember we manufacture camera blocks, not finished camera systems. It is our OEMs that make the finished camera systems, whether they are analog or IP systems. I also realize that for IP camera systems, where full frame rate is not a requirement (or desired), a different implementation method for WDR, where the slow shutter speed can be less than the nominal frame rate, is certainly possible. Perhaps that is how some other camera vendors/manufacturers of IP camera systems implement WDR. Also, with CMOS image sensors that allow individual pixel control of shutter speed (i.e. Pixim, etc.), better implementations of WDR will emerge. Should be an interesting future.
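The dual-shutter / frame-memory flow Steve describes can be sketched roughly like this (the `capture` and `mix` functions are placeholders I made up; only the frame-memory and frame-doubling sequencing follows his description):

```python
# Sketch of a dual-shutter WDR pipeline with frame doubling.

def capture(shutter):
    """Placeholder for an image sensor exposure at the given shutter speed."""
    return {"shutter": shutter}

def mix(slow, fast):
    """Placeholder for the DSP combining dark-area and bright-area detail."""
    return {"combined": (slow["shutter"], fast["shutter"])}

def wdr_stream(n_sensor_frames, slow=1/60, fast=1/4000):
    out = []
    memory = None                       # internal frame memory
    for i in range(n_sensor_frames):
        if i % 2 == 0:
            memory = capture(slow)      # "first" frame: nominal shutter, stored
            if out:
                out.append(out[-1])     # meanwhile, repeat last combined image
        else:
            # "second" frame: high shutter; mix with stored slow-shutter frame
            out.append(mix(memory, capture(fast)))
    return out

frames = wdr_stream(8)
# Each combined image appears twice in the output (frame doubling), so the
# output frame RATE is full but the effective image rate is halved.
```

This makes John's "reduced effective frame rate" point below concrete: the output stream is full rate, but only every other output frame carries new image content.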
"The dual shutter speed method for WDR effectively reduces the frame rate by 2, since the resultant combined image will be mostly of the nominal shutter speed (1/60-sec for NTSC, 1/50-sec for PAL) exposure."
I do not think this is accurate for IP cameras, and I am not sure how accurate it is for analog cameras either. For one, in IP cameras, because there is no interlace, you obviously do not need 1/60 or 1/50 (a 1/30s or 1/25s shutter is sufficient).
My understanding of how non-interlaced IP cameras do multi-exposure WDR and keep full 30fps is that the slowest shutter is shorter than 1/30s. For instance, Axis' true WDR slow shutter is 1/44s. This is combined with a fast shutter. I don't know the specific speed, but let's say 1/100s. These two combined would still total less than 1/30s, allowing both exposures to be taken and still achieve 30fps. Of course, the downside of not allowing the shutter to be slower than 1/44s is (modestly) degraded low light performance. Does this make sense to you?
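A quick arithmetic check of the timing argument above (the 1/100s fast shutter is the assumed figure from the text, not a published Axis number):

```python
# Do both WDR exposures fit inside one 30 fps frame period?

slow = 1 / 44           # Axis "true WDR" slow shutter, per the post
fast = 1 / 100          # assumed fast shutter
frame_period = 1 / 30   # one frame at 30 fps, ~33.3 ms

total = slow + fast     # ~32.7 ms
print(total < frame_period)   # True: both exposures fit within one frame
```

So with a slow shutter capped at 1/44s, back-to-back exposures fit within a single frame period and no frame doubling is needed to hold 30fps.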
IPVMU Certified | 11/14/13 05:16pm
The dual shutter speed method for WDR effectively reduces the frame rate by 2, since the resultant combined image will be mostly of the nominal shutter speed (1/60-sec for NTSC, 1/50-sec for PAL) exposure. The actual camera output is still full frame rate, but as you know, the combined image is stored in memory and repeated (frame doubling).

This IP Camera course has taught me a lot about the "80%" of the market that we are not involved in (our market is high end PTZ camera blocks). As you know, I work for a camera block manufacturer (I know you don't want these forums to be about manufacturers - but I know you can determine my company from my profile...). Anyhow, our latest DSP contains multiple gamma-stretching functions, Automatic DeFog, and a function that stretches the dark parts of the image while leaving the remainder of the image unaltered. We have a block with this technology at our OEMs for evaluation now. I am not sure if you want me to (or if I even can... no NDA...) list further details (model numbers, production timelines, etc.). I don't want the forum to be used as a sales push - that's not my style, or desire. I just thought I should mention that some exciting new technology will be mainstream during 2014.
I have never heard anyone complain about 'reduced effective frame rate'. Can you elaborate?
As for those experiments with 'gamma stretching', is that available in production models? If so, which ones?
IPVMU Certified | 11/14/13 04:41pm
Hi John, the whole topic of WDR is very confusing, isn't it? God I hate "specsmanship"...! Anyhow, there are some new technologies being introduced as substitutes for the traditional "dual shutter speed" method of WDR. Many end-users seem to complain about the reduced effective frame rate of the dual shutter speed WDR implementation. (I think it looks weird also!) Anyhow, the R&D engineers have been experimenting with "gamma stretching" and other effects to produce images that provide a "WDR-like" image in real time. We just demonstrated this latest technology to our OEMs at the ASIS Show. I think it has good potential for scenes such as night-time with auto headlights/taillights and license plate recognition. There will undoubtedly be various "marketing" terminology for these new functions, since many camera vendors will surely develop/license the process. In any event, the main goal of "WDR" (or similar) functions is to provide usable video in challenging (real world) lighting conditions. It should be exciting when this new "WDR" technology makes the mainstream...
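As a rough illustration of what a shadow-stretching tone curve can look like (the curve shape, gamma value, and threshold here are my own illustrative assumptions, not Hitachi's algorithm):

```python
# Sketch of "gamma stretching": lift dark pixel values while leaving
# brighter pixels untouched, producing a WDR-like image from one exposure.

def stretch_shadows(pixel, gamma=0.5, threshold=0.25):
    """pixel in [0, 1]; apply gamma < 1 below the threshold to lift shadows.
    The curve is rescaled so it meets the identity line at the threshold."""
    if pixel >= threshold:
        return pixel                    # mid-tones and highlights unaltered
    return threshold * ((pixel / threshold) ** gamma)

dark, bright = 0.04, 0.8
print(stretch_shadows(dark))    # 0.1 -- shadow detail lifted
print(stretch_shadows(bright))  # 0.8 -- highlights left alone
```

Because this operates on a single normal exposure, it runs in real time with no frame doubling, which is exactly the appeal over dual-shutter WDR.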
Jack, this seems to be a trend for Asian manufacturers right now! Innovating new terms for WDR :)
We'll ask Vivotek for an explanation of what each version/type technically does.
That said, we have only ever found 2 types of WDR - 'electronic' / fake and multi-imager / true - see our WDR Tutorial.