How many surveillance system buyers are in their mid-20s? I am not trying to belittle the question, it just strikes me as not being very common.
To the extent that it happens, I would say something like, "You know the VCR at grandma's house? That's like analog," or "You know how when you watch really old YouTube videos, like from way back in 2011? That's like analog."
Of course, I am omitting the NTSC / PAL SD analog vs. HD analog distinction, but I am presuming the question is more about the old legacy stuff.
IPVMU Certified | 12/08/14 03:53pm
Try this easy analogy:
Analog is a direct representation of the pictures and sound, like the way you can see a speaker wiggle when the volume gets loud, or the way a ripple moves across the water when you throw a rock in a pond. The recording, storing, transmitting and reproduction of those waves can never be 100% perfect, and the signal gets rounded down and diluted pretty quickly. The level of detail originally captured (resolution) is limited by the equipment involved in every step of the process. These recordings are essentially stored miniatures of the original, with distortion and noise messing with their accuracy at every step.
Digital, on the other hand, is a coded representation of the original, like the way the dots & dashes of Morse Code are an interpretation of an original communication. If it is accurate, it stays accurate. No matter how it is stored, transmitted, or reproduced, the code's on/off bits are either right or wrong, not smoothed or rounded down. There are far fewer places for noise or 'close enough' interpretations* to reduce the accuracy of the code. New technologies, whether MP3 files stored on a CD instead of grooves in vinyl albums, or MPEG files on a DVD rather than videotape, are all about holding the 'coded representation' accurate to the original rather than keeping a miniature interpretation of the original. Digital cameras have run away with the level of detail captured by the imager because 1) it can handle the detail, and 2) it is where all the interest and investment is.
(*compression, etc. obviously brings in distortion, but that's too much information for the layman)
IPVMU Certified | 12/08/14 04:03pm
Analog - Vinyl record
Digital - CD
Not perfect, but usually close enough.
IPVMU Certified | 12/08/14 04:10pm
Analog = Pulse of electricity
Digital (IP) = Stream of data
There are a number of analog vs. digital appliance pairs; most of the analog ones are obsolete:
Record vs. CD
35mm film vs. digital camera
a clock with sweep hands vs. a digital display
VHS vs. Blu-ray
Analog meter display vs. a digital display
Then you can explain signal to noise ratio vs. bit error rate.
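If you do go down that road, a couple of made-up numbers help. Here's a tiny Python sketch, with the wattages and the bit-flip count invented purely for illustration, showing the analog figure of merit (SNR) next to the digital one (BER):

```python
# Hypothetical numbers only, to show what the two figures of merit look like.
import math

signal_power = 2.0        # watts (made up)
noise_power = 0.02        # watts (made up)
snr_db = 10 * math.log10(signal_power / noise_power)   # analog quality: 20.0 dB

bits_sent = 1_000_000
bits_flipped = 3          # made up
ber = bits_flipped / bits_sent                          # digital quality: 3e-06

print(f"SNR = {snr_db:.1f} dB")
print(f"BER = {ber:.0e}")
```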
Well, the original vidicon tube cameras formed an image and encoded the images without ever digitizing them, so no A/D. I can see where solid state analog cameras get into some gray areas. For an analogy, if she ever saw an old car radio with AM and FM, that was analog. Satellite radio is digital. Good luck!
IPVMU Certified | 12/08/14 05:18pm
No matter the examples of analog (old) and digital (new) devices, the difference boils down to this:
Analog:
- a fluctuating signal, with the swing of the signal representing amplitude (strength, volume, saturation, etc.) and the speed of the fluctuations corresponding to frequency (pitch, color, etc.)
- Analog means 'analogy', or a representation of something.
- It is like a physical blueprint, and will lose detail as it gets photocopied repeatedly.
Digital:
- a coded record of something (music, moving pictures) where the on/off code describes something (this loud, this color, this change from before, etc.)
- It is a digitized description of something, like an original CAD file of the blueprint above.
IPVMU Certified | 12/08/14 05:25pm
I wouldn't use analog vs. digital as a selling reference. It isn't worth the effort to explain, and really, even an analog camera today goes analog -> digital -> analog, and is then converted back to digital at the recorder, no? I use terms like traditional or standard definition. I also don't use digital video recorder or network video recorder and instead just say recorder. IMO what the customer needs to know is how good and how useful the video they are likely to get will be. That can be explained without a lesson on the difference between analog and digital.
For the record though, being in the security field as well, we have had to explain analog vs. digital over the years as the nation's POTS system slowly fades away. The easiest way I have found to explain it is that in an analog system, as information is sent, everything has to be in order; if information is lost because of noise or some other interference, analog can't figure out what is missing and must start over. With digital, the order does not matter: the receiving end is smart enough to rebuild the information and put it in order, even if it was received out of order, and it can request anything that is missing.
Not perfectly accurate, but it usually gets the point across.
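If it helps to show that rather than just say it, here's a rough Python sketch of the digital side. The chunk size, drop rate, and message are all made up, and real protocols obviously do this differently; it just walks through "arrive out of order, reorder, re-request what's missing":

```python
# Rough sketch: digital pieces can arrive out of order, get reordered,
# and anything missing can be asked for again. All names/numbers are made up.
import random

def send(message, chunk_size=4):
    """Split a message into numbered chunks, like packets."""
    return [(i, message[i:i + chunk_size]) for i in range(0, len(message), chunk_size)]

def noisy_channel(packets, drop_rate=0.3):
    """Shuffle the packets and drop some of them, like a bad link."""
    survivors = [p for p in packets if random.random() > drop_rate]
    random.shuffle(survivors)
    return survivors

def receive(arrived, sender):
    """Reassemble by sequence number; re-request whatever never showed up."""
    got = dict(arrived)
    for seq, chunk in sender:
        if seq not in got:          # "please resend chunk #seq"
            got[seq] = chunk
    return ''.join(got[seq] for seq in sorted(got))

original = send("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG")
print(receive(noisy_channel(original), original))   # the original text comes back intact
```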
Did you try explaining analog as a variation of a wave, and digital as a variation of 1's and 0's?
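That's probably the cleanest framing, and it can even be demonstrated. Below is a minimal Python sketch (the sample count and the 8 levels are arbitrary choices, not anything standard) where a smooth wave is measured at fixed instants and each measurement is rounded to one of a few levels, which is basically all 'digital' means:

```python
# Minimal sketch: turning a smooth (analog) wave into discrete (digital) samples.
# Sample count and bit depth below are arbitrary, chosen only for illustration.
import math

SAMPLE_POINTS = 16   # how often we measure the wave
LEVELS = 8           # 3-bit quantization: each sample becomes one of 8 values

def analog_wave(t):
    """A continuous signal: any value, at any instant."""
    return math.sin(2 * math.pi * t)

def digitize(num_samples=SAMPLE_POINTS, levels=LEVELS):
    """Measure the wave at fixed instants and round each value to a level."""
    samples = []
    for n in range(num_samples):
        value = analog_wave(n / num_samples)           # measure
        level = round((value + 1) / 2 * (levels - 1))  # quantize to 0..levels-1
        samples.append(level)
    return samples

print(digitize())  # [4, 5, 6, 7, 7, 7, 6, 5, 4, 2, 1, 0, 0, 0, 1, 2]
```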
Now that there have been several decent responses to the 'basic questions', I wanted to mention a common, if slightly technical, misconception regarding A vs. D in general.
One would imagine that, after soaking in the various answers and resolution analogies, Luke's friend would probably get the impression that digital has a higher informational capacity (greater bandwidth) than analog. That's not the case at all, though. And Michael is right when he says:
Digital is a coded representation of the original, like the way the dots & dashes of Morse Code are an interpretation of an original communication. If it is accurate, it stays accurate. There are far fewer places for noise or 'close enough' interpretations* to reduce the accuracy of the code.
but this 'coded representation' also comes at a significant price in bandwidth, precisely due to the drastic reduction of gray areas in signal semantics, i.e., today's typical bit encoding adds a lot of room in between values to reduce the error rate.
It's similar to writing a number in base 10 (what we all use) vs. base 2 (binary). For example, 15835 expressed in base 10 is simply 15835; in base 2 it's 11110111011011. Obviously base 10 is far more concise than base 2, but if you photocopied photocopies of both representations a few times, you would likely be able to reproduce the degraded base-2 digits more accurately than the degraded base-10 ones, mainly because you only have two choices, and if you can tell it's not a zero, then it must be a one... This is an oversimplification of course, but hopefully the general idea comes across.
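To make the photocopy thought experiment concrete, here's a small Python sketch; the noise level and the whole 'ink intensity' scale are invented for illustration, but it shows why a smudged binary digit snaps back to the right value far more often than a smudged decimal digit does:

```python
# Made-up "photocopier": each generation smudges every digit, then the reader
# rounds each smudged mark back to the nearest legal symbol for that base.
import random

def photocopy(digits, base, generations=5, noise=0.8):
    """Smudge and re-read a digit string several times; return what survives."""
    spacing = 9 / (base - 1)             # binary marks sit at 0 and 9; decimal at 0..9
    values = [int(d) * spacing for d in digits]
    for _ in range(generations):
        values = [v + random.gauss(0, noise) for v in values]   # smudge
        values = [min(range(base), key=lambda s: abs(s * spacing - v)) * spacing
                  for v in values]                               # re-read
    return ''.join(str(round(v / spacing)) for v in values)

random.seed(1)
print(photocopy("15835", base=10))          # decimal digits tend to drift to neighbors
print(photocopy("11110111011011", base=2))  # binary digits almost always come back intact
```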
In general computing we value error-reduction far more than bandwidth utilization, since a spreadsheet copied even 99.99% correctly is virtually useless. Video, on the other hand, is no big deal. That's why analog transmission of HD is viable. It's also the reason digital can use lossy compression algorithms and get away with it.
So digital sacrifices bandwidth for error-reduction...
I also like to explain that with analog, if you lose signal you get noise or static. We are all used to this from old cell phones, snow on TV, or noise on a record. In a digital world, your cell phone drops for a few seconds, or your TV shows garbled/colored squares or loses the signal altogether. With analog, our brain could usually figure out what the data was through all of the noise (maybe by squinting at the TV). With digital, you usually get nothing recognizable until the signal refreshes.
Kids nowadays won't understand if you say there is snow on a TV. I love all of the movies with high-tech TV or CCTV that still show snow, because it is more recognizable than "NO SIGNAL".