Tedor, interesting discussion! Richard, good feedback!
To answer this question, we first need to agree on what we mean by 'resolution'. As you all have alluded to above, there are 2 common meanings:
- Resolving power, i.e., what can be seen
- Pixel count, i.e., the number of pixels on a sensor
In this industry, almost everyone uses resolution to mean pixel count. And on that count, 4K is certainly 4x the 'resolution' of 1080p. It's a simple matter of counting physical pixels on the sensor.
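The pixel-count arithmetic is straightforward, as a quick sketch shows (standard UHD and Full HD frame dimensions assumed):

```python
# Pixel-count comparison of 4K (UHD) vs. 1080p (Full HD).
uhd = 3840 * 2160      # 8,294,400 pixels (~8.3MP)
full_hd = 1920 * 1080  # 2,073,600 pixels (~2.1MP)

print(uhd / full_hd)   # 4.0 -- exactly 4x the pixel count
```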
What makes this more complicated is that the traditional meaning of resolution is the ability to resolve details, like reading an eye chart. It does not matter how big your eyes are or how many rods or cones they have; what matters is whether you can make out the smaller lines on the chart.
On that front, the resolving gains of 4K range from negative to perhaps double. They will likely be negative in low light, as 4K cameras will generally be worse than their 1080p counterparts (both smaller pixels and less sophisticated image processing). They may also be worse in WDR scenes, as first-generation 4K cameras will not have true multi-exposure WDR. However, in certain wide scenes with even lighting, you might be able to see more detail at farther distances; whether that gain is 1.5x, 2x, etc. is a matter of debate, as the sketch below illustrates.
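One common way to reason about this is pixels per foot (PPF): horizontal pixel count divided by the scene width the camera covers. A minimal sketch, assuming both cameras cover the same field of view (the 50ft scene width here is a purely illustrative assumption):

```python
def pixels_per_foot(horizontal_pixels: int, scene_width_ft: float) -> float:
    """Nominal detail density: sensor pixels spread across the scene width."""
    return horizontal_pixels / scene_width_ft

SCENE_WIDTH_FT = 50.0  # hypothetical field-of-view width

for name, h_pixels in [("1080p", 1920), ("4K", 3840)]:
    print(f"{name}: {pixels_per_foot(h_pixels, SCENE_WIDTH_FT):.1f} PPF")
# 1080p: 38.4 PPF
# 4K:    76.8 PPF -- 2x the nominal density, though real resolving
# gains depend on lighting, optics, and compression.
```

This is why 2x is the theoretical ceiling on resolving gains: the horizontal pixel count only doubles, even though the total pixel count quadruples.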
There's only one claim here that is absolutely false: Simon Lambert's conclusion that 4K will have "four times the demand on your storage and bandwidth" compared to 1080p. Bandwidth does not increase linearly with resolution. For example, just as 1.3MP was 4x the pixels of VGA, it was not 4x the bandwidth / storage. This is because one can more efficiently compress more pixels covering the same scene without sacrificing quality.
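To make the sublinear scaling concrete, here is a toy model. The exponent and base bitrate are illustrative assumptions, not measured values; the point is only that if bitrate grows slower than pixel count, quadrupling pixels costs well under 4x the bandwidth:

```python
# Toy model of sublinear bitrate scaling: bitrate ~ pixels ** K, with
# K < 1 because neighboring-pixel redundancy compresses better at
# higher resolutions. K and the base bitrate are assumptions.
K = 0.75
BASE_PIXELS = 1920 * 1080
BASE_BITRATE_MBPS = 4.0  # hypothetical 1080p stream

def estimated_bitrate_mbps(pixels: int) -> float:
    return BASE_BITRATE_MBPS * (pixels / BASE_PIXELS) ** K

print(f"{estimated_bitrate_mbps(3840 * 2160):.1f} Mbps")
# 11.3 Mbps -- roughly 2.8x the 1080p bitrate, not 4x
```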