4K is four times the resolution of 1080p. It's double the resolution both horizontally and vertically, and 2x2=4.
See the image below. You can fit four 1080p green rectangles in the red 4K rectangle. Argument over.
Tedor, interesting discussion! Richard, good feedback!
To answer this question, we need to first agree on what we mean by 'resolution'. As you all have alluded to above, there are two common meanings:
- Resolving power, i.e., what can be seen
- Pixel count, the number of parts on a sensor
In this industry, almost everyone uses resolution to mean pixel count. And on that count, 4K is certainly 4x the 'resolution' of 1080p. It's simply a matter of counting physical parts on a sensor.
What makes this more complicated is that the traditional meaning of resolution is the ability to resolve details, like looking at an eye chart. It does not matter how big your eyes are or how many rods or cones they have. What matters is if you can see smaller lines on the chart.
On that measure, the resolving gains of 4K range from negative to maybe double. They will likely be negative in low light, as 4K cameras will generally be worse than their 1080p counterparts (both smaller pixels and less sophisticated image processing). They may also be worse in WDR scenes, as first generation 4K cameras will not have true multi-exposure WDR. However, in certain wide scenes with even lighting, you might be able to see more details at farther distances; whether that's 1.5x, 2x, etc. is a matter of debate.
There's only one claim here which is absolutely false: Simon Lambert's conclusion that 4K will have "four times the demand on your storage and bandwidth" of 1080p. Bandwidth does not increase linearly with resolution. For example, just as 1.3MP was 4x the pixels of VGA, it was not 4x the bandwidth / storage. This is because one can compress more pixels in the same space/scene more efficiently without sacrificing quality.
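The bits-per-pixel point can be sketched numerically. The bitrates below are purely hypothetical placeholders, not measured values; only the shape of the relationship (bitrate growing slower than pixel count) matters:

```python
# Hypothetical bitrates (illustrative only, not measured values):
# bitrate does not scale with pixel count because encoders spend
# fewer bits per pixel as resolution grows.
streams = {
    "VGA (640x480)":     {"pixels": 640 * 480,    "kbps": 1000},
    "1.3MP (1280x1024)": {"pixels": 1280 * 1024,  "kbps": 2000},
}

for name, s in streams.items():
    # bits per pixel per second for this stream
    bpp = s["kbps"] * 1000 / s["pixels"]
    print(f"{name}: {bpp:.2f} bits/pixel/s")

# In this sketch, 1.3MP has ~4.3x the pixels of VGA but only 2x the
# bitrate, i.e. roughly half the bits spent per pixel.
```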
Very interesting discussion.
When I'm testing the difference between two displays (1080P and 4K), the main factor is the number of pixels.
A 4K monitor (most of them) has 4 times the pixels of a 1080P monitor.
For example, a 1080P display from a certain company has 1920 x 1080 (2,073,600 pixels), while a 4K display has 3840 x 2160 (8,294,400 pixels).
So isn't it simple mathematics?
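The arithmetic above can be checked in a few lines (a minimal sketch; 1920x1080 and 3840x2160 are the standard FHD and UHD raster sizes):

```python
# Pixel counts for 1080p (FHD) and 4K (UHD) displays
fhd_w, fhd_h = 1920, 1080
uhd_w, uhd_h = 3840, 2160

fhd_pixels = fhd_w * fhd_h   # 2,073,600
uhd_pixels = uhd_w * uhd_h   # 8,294,400

print(uhd_pixels / fhd_pixels)           # 4.0 -- four times the pixel count
print(uhd_w / fhd_w, uhd_h / fhd_h)      # 2.0 2.0 -- but only double per axis
```

So both camps are right about the numbers: 4x the pixels, 2x per axis.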
In the comments of that article, Lambert shows that he is basing resolution purely on pixels, not on actual image performance.
Excerpts from the comments:
- First, he says, "Obviously comparing 12.7px/cm with 6.35px/cm we can see that this 4K image gives twice as much detail in each centimetre of the person's height than does the full HD image. So, this is twice the resolution, not four times."
- Then when questioned he agrees that, "The resolution of the FHD camera (in px/cm or px/ft, etc) could be extended by a 4K camera to cover an area twice as high and twice as wide so this would mean four times the area is covered."
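Lambert's two quoted statements are actually consistent with each other, and can be sketched numerically (the 12.7px/cm and 6.35px/cm densities are his figures; everything else is illustrative):

```python
# Lambert's figures: pixel density on a subject of fixed height
fhd_density = 6.35   # px/cm with a 1080p camera
uhd_density = 12.7   # px/cm with a 4K camera, same field of view

# Same scene framed identically: twice the linear detail
print(uhd_density / fhd_density)  # 2.0

# Alternatively, keep the FHD density and widen the 4K view instead:
# twice the width and twice the height at the same px/cm
area_factor = 2 * 2
print(area_factor)  # 4 -- four times the area covered
```

Whether "resolution" means the linear 2x or the areal 4x is exactly the terminology dispute in this thread.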
Unfortunately, he totally misses the other elements beyond mere pixels that go into image quality (like the ability to handle contrast, low light, etc.). In this case, that is a huge deal, as 4K cameras will be much worse in these areas (at least for the first generation or two of 4K).