Given that camera systems range from one to tens of thousands of cameras, the mean is going to be much higher than the median. For example, say we had this set of camera systems: 1, 4, 6, 9, 16, 24, 60, 100, 1000 cameras. The median would be 16...
Now, I don't think it's this skewed in practice, but surely the mean is several times the median.
John, IMHO, the range of values itself has no effect on whether the mean or median will be higher, as evidenced by the following symmetric series:
- [1..9] mean = 5 median = 5
- [1..999] mean = 500 median = 500
- [1..99999] mean = 50000 median = 50000
Of course these sets are not representative of camera deployments, since they contain far too many high numbers...
But your example suffers from the same distortion, to an even greater degree: intentionally or not, it leaves out any duplicates in the lower values. Surely for every single 1000-camera deployment there exist many, many multiples of 1, 2, 3, 4, 5, 6, 7, and 8 camera ones, agree?
It should start out more like 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2.....
So, assuming even a roughly continuous distribution of deployment sizes, I think the duplicate low values will keep the mean under the median. But I can really suck at math and not realize it sometimes. Is this one of those times? :)
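One way to sanity-check this is to just compute both statistics on a duplicate-heavy sample. The counts below are entirely made up for illustration (40 one-camera systems, 20 two-camera systems, and so on, down to a single 1000-camera outlier); they are not real deployment data, just a sketch of the "many small, few huge" shape being discussed:

```python
import statistics

# Hypothetical, illustrative sample: lots of small deployments,
# a long tail of large ones. 100 values total.
sample = (
    [1] * 40 + [2] * 20 + [4] * 15 + [8] * 10
    + [16] * 8 + [60] * 4 + [100] * 2 + [1000] * 1
)

print(statistics.median(sample))  # -> 2
print(statistics.mean(sample))    # -> 17.88
```

With this particular shape, the duplicates pin the median down near the bottom while the single large deployment still drags the mean well above it; of course, a different mix of counts would give different numbers.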