The outstanding performance of HPS for outdoor lighting does not come down to lumens alone, but to our eyes' and brain's ability to complete the full image from an otherwise very unbalanced input (a scene lit by non-uniform, near-monochrome, orange light)
(Incidentally, this is the very same reason why LED outdoor lighting achieves the exact opposite for our eyes, yet may actually perform well for a video camera)
Dynamic range:
Our eyes have a very wide dynamic range. Every time you look from an outdoor location under bright sunlight (100,000 lux and higher) into the dimly lit lobby of a building through an entrance door (which can be as low as 100 lux), you are seeing at the same time two areas with a 1000x difference in average light levels, and are able to perceive what is in both of them
Under HPS (or really pretty much any) outdoor lighting, there are brightly lit areas under the light, and dimly lit areas away from it. HPS does not ruin night vision, so it allows us to make use of that huge dynamic range
Video cameras of the 80s had awful dynamic range. Even today's cameras are a far cry from the capabilities of our eyes. Filming under HPS may capture a scene that is well lit, or even clipping, directly under the light, yet has an abrupt cutoff beyond which the camera captures nothing
Under mercury light, the area to be filmed likely has higher uniformity and lower intensity, allowing the camera to see a wider area correctly within a single brightness setting
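The contrast between a linear, clipping sensor and a roughly logarithmic eye can be sketched numerically. This is a toy model, not camera firmware: the lux values, the 0.1-100,000 lux range for the eye, and the exposure choice are all assumptions picked to illustrate the 1000x spread described above.

```python
import math

def camera_8bit(luminance_lux, full_scale_lux):
    """Linear 8-bit sensor: scales to the chosen exposure, clips at 255."""
    value = round(255 * luminance_lux / full_scale_lux)
    return max(0, min(255, value))

def eye_response(luminance_lux, min_lux=0.1, max_lux=100_000):
    """Crude logarithmic model of perceived brightness, scaled 0..1."""
    lum = max(luminance_lux, min_lux)
    span = math.log10(max_lux) - math.log10(min_lux)
    return (math.log10(lum) - math.log10(min_lux)) / span

# Hypothetical scene: exposure is set for the bright pool under the lamp.
scene = {"under the lamp": 50.0, "sidewalk nearby": 5.0, "away from the lamp": 0.5}

for spot, lux in scene.items():
    print(f"{spot:>20}: camera={camera_8bit(lux, 50.0):3d}/255, "
          f"eye={eye_response(lux):.2f}")
```

With these numbers the camera pins the lit area at full scale and leaves the dark area on a handful of code values near black, while the logarithmic model still assigns the dark area a clearly non-zero response.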
Color rendering:
The tiny amount of blue and green in the HPS spectrum is all it takes to provide us with full color vision
Our brain understands that the light itself is orange, and extrapolates from the little blue/green the eyes do see to infer what each surface in the scene would look like under natural light. That is then matched against things we are familiar with as a reference
A video camera does none of this. The image is quantized into the 3 color channels of an image sensor (whose sensitivity spectra are completely different from those of any cells in the eye), each channel is stored on and read from media (with or without correctly matched gain), and finally displayed on the 3 color channels of a screen (with a light spectrum that has nothing in common with the original spectrum at the scene, or even with the camera sensor)
Now, instead of the 589 nm peak of sodium light plus a little more information in other areas of the spectrum, we are presented with a scene that has "the same colors", but is actually recreated either from the wide emission spectra of green and red phosphors (CRT), filtered down from the emission of fluorescent or LED backlights (LCD), or produced by a UHP mercury lamp (in the cinema), all built from data already corrupted by the camera
Our brain won't know what to make of it, and either won't attempt, or won't successfully perform, the reconstruction it does with the real-life scene
And given the awful dynamic range, the blue and green data will probably be nearly fully lost at the camera anyway
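The point about the blue/green data being lost at the camera can be made concrete with a toy white-balance calculation. The per-channel energies below are made-up illustrative numbers, not measured spectra; the idea is that a near-monochrome illuminant leaves one channel only a few code values above black, so the gain needed to rebalance it amplifies noise and quantization by the same factor.

```python
def quantize_8bit(x):
    """Clip to [0, 1] and quantize to an 8-bit code value."""
    return max(0, min(255, round(x * 255)))

# Relative R/G/B energy reaching the sensor from a white surface (assumed):
hps_light = {"R": 1.00, "G": 0.15, "B": 0.02}      # orange, near-monochrome
mercury_light = {"R": 0.70, "G": 1.00, "B": 0.80}  # whitish

for name, light in [("HPS", hps_light), ("Mercury", mercury_light)]:
    codes = {ch: quantize_8bit(v) for ch, v in light.items()}
    # White balance scales each channel so a white surface comes out equal.
    gains = {ch: 255 / max(code, 1) for ch, code in codes.items()}
    print(f"{name:>7}: raw codes={codes}")
    print(f"{'':>7}  white-balance gains="
          f"{ {ch: round(g, 1) for ch, g in gains.items()} }")
```

Under the assumed HPS numbers the blue channel lands around 5 of 255 codes, so a roughly 50x gain is needed to balance it, and every quantization step and bit of sensor noise is magnified by that same 50x. Under the whitish mercury numbers all three gains stay small.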
With mercury light, which is white to begin with, there is much less to go wrong