Optical Zoom in Analog vs. IP Cameras

Is the optical zoom in an IP camera the same as in an analog camera or not? I mean, if I set 22x optical zoom on an analog Wonwoo camera, will it show me the same distance as 22x on an AVer IP camera?

If possible, please explain the difference.

Hello Mohammed:

The key here is to ignore the zoom power number, i.e.: 22X.

That number is a multiplier based on the widest field of view vs. the most narrow (telephoto). This slide from our IPVMU class on PTZs directly addresses this point:

For example, the 'horizontal angle of view' for a camera might be 55.8° - 1.7°. Dividing the widest angle by the narrowest gives roughly the zoom number: 55.8° / 1.7° ≈ 33, close to the rated 34X (strictly, the zoom ratio is the ratio of focal lengths, which the angle ratio only approximates).

Because 'zoom power' is a ratio, you could potentially have two cameras that both claim 34X zoom yet behave very differently. For example, I could have a camera with a 'horizontal angle of view' range of 84.0° - 2.5°, which gives the same zoom number of 34X, but with everything else equal it would not zoom as far.
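A quick way to see this numerically. This is a sketch in Python using the two angle-of-view ranges from the thread; treating the zoom number as a pure angle ratio is an approximation, and the scene-width figures come from standard angle-of-view geometry:

```python
import math

def zoom_ratio(wide_deg, tele_deg):
    """Approximate zoom number as the ratio of widest to narrowest angle of view."""
    return wide_deg / tele_deg

def scene_width_m(distance_m, hfov_deg):
    """Width of the scene captured at a given distance for a given horizontal angle."""
    return 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)

# Camera A: 55.8° - 1.7°; Camera B: 84.0° - 2.5° (figures from the thread)
print(round(zoom_ratio(55.8, 1.7), 1))   # ≈ 32.8, rated ~34X
print(round(zoom_ratio(84.0, 2.5), 1))   # ≈ 33.6, also ~34X
# At full telephoto, 100 m away, camera A covers a narrower (more magnified) scene:
print(round(scene_width_m(100, 1.7), 2))  # ≈ 2.97 m
print(round(scene_width_m(100, 2.5), 2))  # ≈ 4.36 m
```

Same zoom number, but camera A's 1.7° telephoto end packs its pixels into a scene only about two-thirds as wide, which is what "zooms farther" actually means.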

So to answer your question: Resolution and angle of view give you the answer, not zoom number. Even when comparing analog and IP PTZs, we start there.

This update "How far can a PTZ see?" is especially useful here, and I'd recommend reading through it!

No. Knowing that the lens is a 10x or a 20x only tells you that it magnifies things ten times or twenty times. It doesn't tell you ten times what. A 4.8mm~48mm lens and a 10mm~100mm lens are both 10x zoom lenses, but they'll show you very different things. A 50mm lens on a 1/4" sensor will show you something very different than a 50mm lens on a 1/3" sensor.
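The point above can be sketched with the thin-lens angle-of-view formula. The nominal sensor widths are assumptions (1/3" ≈ 4.8 mm wide, 1/4" ≈ 3.6 mm wide are the usual nominal figures):

```python
import math

def hfov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view via the thin-lens approximation."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# Both ends of the 4.8-48mm "10x" lens on a 1/3" sensor:
print(round(hfov_deg(4.8, 4.8), 1))   # ≈ 53.1° at the wide end
print(round(hfov_deg(48.0, 4.8), 1))  # ≈ 5.7° at the telephoto end
# Same 50mm focal length, different sensors:
print(round(hfov_deg(50.0, 3.6), 1))  # 1/4" sensor: ≈ 4.1°
print(round(hfov_deg(50.0, 4.8), 1))  # 1/3" sensor: ≈ 5.5°
```

So the same 50mm lens gives a noticeably narrower view on the smaller sensor — focal length alone doesn't tell you the field of view.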

Ari, if I may, while you're talking about zooms and sensors and such notions, I'd like to ask what I'm sure is going to be a dumb question, but I can't for the life of me figure it out. I bought a Tamron 10x zoom lens a while back that I tried out on a StarDot 5 MP camera; it worked OK, but I ended up using a different one. The question is: how come I can't put my eye at about the distance the sensor was from the end of the lens and see through it like a little telescope? If I put my eye back 10 inches or so, I can see an upside-down image, but shouldn't my eye be able to take the place of a sensor?

The imager size of your eye is different than the imager size of the camera?

Not to be funny but, how do I measure the 'imager size of my eye'? Is it just the pupil?

Sorry I couldn't get to this yesterday! I wasn't completely sure of the answer, so I contacted Stu Singer of Schneider Optics to clear up a few points for me.

It boils down to the fact that the human eyeball isn't very much like a camera at all. First, the "sensor size" is something in the neighborhood of 22mm to 24mm, which is almost 7/8" to just over 15/16". Second, that sensor is, as Carl says below, actually *curved*, which means any lens designed to focus light onto a flat sensor is going to act all wiggity-whack, to use a technical term.

Another difference between eyes and cameras is that, instead of the "pixels" being spread out evenly, they're mostly concentrated in the center of the eye. This spot has the highest resolution and contains only pixels capable of seeing color; the rest of the eye contains pixels capable of seeing color and pixels capable of seeing only black and white. This means that while the eye has an extremely wide angle of view (about 130° to 160°, with both sensors combined), only the center of the image is in high resolution (about 52 megapixels or so). The rest of the image is in lower resolution, and in black and white (down to as low as 5MP). In other words, while you can read a book in the center of your field of view, the edges of your field of view are mostly good for large objects, although they're excellent at seeing, and quickly reacting to, motion, so you don't walk into buildings or get eaten by a lion while you're reading.

As you can imagine, bandwidth, processing power, and storage are the most limiting factors in getting a signal from your eyeballs into your brain, turning it into useful information, and archiving it. 130 million sensors in the retina are trying to send video to the brain via 1.2 million fibers, so less than 10% of the potential data reaches the brain in any case; not that the brain could process that much data even if the bandwidth were increased. The data is then sent to the graphics card for processing; this compares the images from both eyes, assembles them into a 3D image, and sends interesting images to the subconscious brain, which then activates the brain's analytics package for image comparison, recognition, and analysis. The subconscious brain then strips out extraneous data and sends a portion of the image to the conscious brain for recognition, further analysis, and decision-making. The graphics card is also responsible for making constant minute adjustments to the center of the eye to refresh and widen the cone of interest (the 55° angle packed with the majority of sensors).

More reading here and here.

Congratz Ari! I must reluctantly return my Golden Nerd award, back to whence it sprung. :)

What? Science isn't nerdy! Oh, wait.

The other thing he doesn't mention is that your eye's "sensor" is already set back a good inch or so from your cornea, whereas a lens is only designed to sit 1/8" to 1/4" from the sensor... in other words, holding the back of the lens the same distance from your eye as it would sit from the imager will only focus the image on your cornea, not your retina.

Keep in mind that your eye is different from a sensor. Your retina is the equivalent of an image sensor, but you have the lens of the eye in between. And, unless your eyeball is really distorted, your retina is rounded, whereas an image sensor is flat.

Peering through a lens as if it were a telescope will be quite a bit different from your camera using that same lens, because your refractive path will also include your eye's lens, but the camera's refractive path won't.

To get a sense of how a lens might work with a particular camera, you might place a flat scattering medium such as a piece of ground glass or a thin sheet of paper in place of the imaging sensor. From the back side of it, you can view the light that the lens projects onto this surrogate focal plane array. Then the comment might read,

"I put my imaging plane about the distance the sensor was from the end of the lens and see..." well, you should see about what the focal plane array would see. In your mind's eye, crop that projected image to match the area of the focal plane array sensor you expect to be using.

The weakness of this approach is that it's awfully hard to get a sense of sharpness and quality from a projected 1/4" or 1/3" image. I'm afraid there's not much that can compare to actually seeing that camera/lens combo's video on a decent monitor, which brings us back to John's Point Source suggestion.

Peering through a lens as if it were a telescope will be quite a bit different from your camera using that same lens, because your refractive path will also include your eye's lens, but the camera's refractive path won't...

Sounds right.

Do you mind completing the exercise and explaining what the telescope has that the zoom lens doesn't? To the laity they might seem similar, i.e. a tube with a couple of lenses on both ends that can move, and probably some other ones in between, the purpose of which is to make stuff look closer...

From what you're saying, maybe the zoom lens is a subset of the telescope and you just need to add the final ground glass?

The basic telescope has two lens systems. The objective lens is farthest from the user. It collects all the light and focuses it onto the image plane. However, since the human eye has its own unique refractive properties, the eyepiece or ocular lens (closest to the user) refracts that light so that it won't focus on the imaging plane, but will instead focus on your retina after it goes through your own refractive system.

If you remove a telescope's eyepiece, you should be able to hold a piece of ground glass or a thin sheet of paper at nearly the same distance to see the image projected upon it.

Bottom line:

Without the ocular lens, the telescope can easily focus the image onto a focal plane such as ground glass or a camera array.

With the ocular lens, the telescope accommodates the human optical path for easy viewing through the eye.

P.S. This is true of microscopes as well. A trinocular microscope has about the same path length from the objective lens to the binocular eyepieces as to the focal plane array, but the ocular lenses are on the human end of the instrument, while the focal plane array needs no further lenses because the image is formed directly upon it.
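For completeness, the standard textbook relation for that two-lens refractor is that angular magnification equals the objective focal length divided by the eyepiece focal length. A minimal sketch, with hypothetical focal lengths (the thread doesn't give any):

```python
def angular_magnification(f_objective_mm, f_eyepiece_mm):
    """Textbook thin-lens relation for a simple refracting telescope."""
    return f_objective_mm / f_eyepiece_mm

# Hypothetical example: 900 mm objective paired with a 25 mm eyepiece
print(angular_magnification(900, 25))  # 36.0x
```

This is also why swapping eyepieces changes a telescope's power: the objective and its image plane stay put, and only the ocular stage changes.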

Optical zoom in IP and analog cameras is fundamentally the same, except for one major differentiator - resolution.

The ability to 'see' things far away depends on the focal length (i.e., all things equal, a PTZ with a max focal length of 100mm will 'see' farther than one with 50mm). However, the greater resolution of IP MP PTZs allows them to see farther, even with short focal lengths. See: SD vs. HD PTZ Shootout
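A rough sketch of why resolution matters: pixel density across the scene (pixels per foot, a common density metric) scales directly with horizontal pixel count at the same field of view. The 40 ft scene width below is a hypothetical figure, not from the shootout:

```python
def pixels_per_foot(horizontal_pixels, fov_width_ft):
    """Pixel density across a scene of a given width."""
    return horizontal_pixels / fov_width_ft

# Hypothetical 40 ft wide scene, identical optics, different imagers:
print(pixels_per_foot(704, 40))   # SD (704 px wide): 17.6 ppf
print(pixels_per_foot(1920, 40))  # 1080p (1920 px wide): 48.0 ppf
```

With identical optics, the 1080p imager delivers nearly 3x the pixel density, so details remain usable at distances where the SD image has already fallen apart.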

Thanks to all for your efforts. I was just asking because I was afraid the depth of view for the two cameras would be very different.

The depth of view is an entirely different thing. That has mostly to do with the amount of light there is available.

OK, I understand that I must focus on focal length and resolution, not the optical zoom number. So now I wonder: how can I know the field of view for a camera from its focal length?
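One common way to answer that, sketched under a thin-lens/pinhole assumption. You also need the sensor width, which comes from the camera's spec sheet; the 1/3" sensor (≈ 4.8 mm wide, a nominal figure) and the 4 mm lens below are illustrative:

```python
import math

def hfov_deg(focal_mm, sensor_width_mm):
    """Horizontal angle of view from focal length and sensor width (thin-lens)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

def scene_width(distance, focal_mm, sensor_width_mm):
    """Width of the captured scene at a given distance (same units as distance)."""
    return distance * sensor_width_mm / focal_mm  # thin-lens simplification

# Illustrative example: 4 mm lens on a 1/3" sensor, target 10 m away
print(round(hfov_deg(4.0, 4.8), 1))  # ≈ 61.9° horizontal angle of view
print(scene_width(10.0, 4.0, 4.8))   # 12.0 m wide scene at 10 m
```

Note that focal length alone isn't enough: the same formula shows that doubling the focal length halves the scene width, and so does halving the sensor width.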

Wow! How amazing is the human body with all the interacting "hardware" !!!!

Now think for a minute about an eagle's eyes. What a wonderful and amazing world we live in!

And then there is our sense of smell, and how immensely greater is that of our canine best friends, around us all the time.

Thanks for the questions/answers comparing our eyes to IP cameras. We humans take so much for granted every day.