There is a fixed voltage drop at the electrodes, which does not produce any light. This drop is about 15..20V and it does not depend on the total arc voltage; since it generates no light, it is pure loss.
With a 120V arc this drop wastes roughly 12..17% of the lamp power; with a 240V arc the loss shrinks to about 6..8%.
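Just to make the arithmetic explicit, here is a minimal sketch in plain Python (taking the 15..20V combined electrode drop from above as the assumption):

    def electrode_loss_fraction(arc_voltage_v, electrode_drop_v):
        # The same arc current flows through the electrode fall regions and the
        # light-producing column, so the lost power fraction is just the voltage ratio.
        return electrode_drop_v / arc_voltage_v

    for arc_v in (120, 240):
        lo = electrode_loss_fraction(arc_v, 15)
        hi = electrode_loss_fraction(arc_v, 20)
        print(f"{arc_v} V arc: {lo:.1%}..{hi:.1%} of lamp power lost at the electrodes")
    # 120 V arc: 12.5%..16.7% of lamp power lost at the electrodes
    # 240 V arc: 6.2%..8.3% of lamp power lost at the electrodes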
Now you may ask: why aren't the lower wattage lamps designed with a higher arc voltage? The thing is, a higher voltage means a longer arc. When targeting some exact power, a longer arc means lower arc loading. You may make the arc shorter for the same voltage by raising the operating pressure, but then the thing has to operate at a higher temperature, so more power gets lost as thermal radiation from the arc tube body. That means another kind of losses, plus engineering difficulties to make the thing reliable enough. So there is a kind of optimum arc voltage: if you design the arc voltage too low, the electrode drop losses kill the efficacy; if too high, the low arc loading and the other losses kill it. That is the reason why the US HPS lamps (S56 and above) have a different arc voltage for each wattage: the specs were designed so you get maximum efficacy from the given power rating.

Now in real life the lamp alone is not the only thing having losses. The big loss contributor is the ballast, and there it makes a significant difference how complex the ballast has to be. The minimum losses are with the simplest series reactor ballast. But this ballast style has a fixed OCV (the input mains), so it limits the usable lamp arc voltage to about half of that (see the sketch below). Because the mains voltage is fixed, it is the lamp design that has to be tweaked to suit the series reactor.
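As a rough illustration of that "about half the OCV" rule, here is a toy choke sizing calculation. It is only a sketch: the quadrature approximation and the 400W MV example figures (~135V arc, ~3.25A) are my own assumptions, not values from the text.

    import math

    def reactor_inductance(v_mains, v_lamp, i_lamp, f_hz=50.0):
        # First-order approximation for a series choke ballast on sinusoidal mains:
        # the lamp voltage and the choke drop add roughly in quadrature,
        # V_mains^2 ~ V_lamp^2 + (I_lamp * X_L)^2.
        v_choke = math.sqrt(v_mains**2 - v_lamp**2)  # voltage left for the choke
        x_l = v_choke / i_lamp                       # required reactance (ohm)
        return x_l / (2 * math.pi * f_hz)            # inductance (H)

    # Hypothetical 400 W MV lamp, ~135 V arc, ~3.25 A, on 230 V mains:
    print(f"{reactor_inductance(230, 135, 3.25) * 1e3:.0f} mH")  # ~182 mH
    # As v_lamp approaches v_mains, v_choke goes to zero: the choke can no longer
    # stabilize the arc or reignite it each half-cycle, hence the ~half-OCV limit.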
The MV specs came from the UK in the 1930's, where the 240V mains matched the optimum pretty well for the 250..400W rated MVs. Add to that the fact that the efficacy vs. design arc voltage curve is quite flat with the MV concept, and all MVs ended up designed around series choke characteristics on a 2xxV supply.
This works well for all wattages up to 400W. For 1kW and above, the 120V arc voltage moves further away from the optimum, plus the arc current of 10A becomes quite a stretch for a normal installation, so a larger array would more likely be connected across all three phases. That brings other problems: when compensated fixtures are connected in "Y" (so fed by 230V) and the common Neutral breaks, the whole installation tends to go crazy, with spot overloading and similar problems due to series LC circuits (ballasts and their compensation capacitors) tuned close to the 50Hz forming in the system. To fix these problems, the compensation capacitors have to be removed from the individual fixtures and replaced by capacitors connected in "D" across the phases. But with a 3-phase installation there is a very convenient alternative: connect the load between two phases, so across 400V (which corresponds to a 230V phase voltage). Then there is no Neutral problem anymore, just the feed becomes higher voltage. And as written above, the 1kW and above would actually benefit from a higher arc voltage (closer to the efficacy optimum), so these wattages were designed with higher arc voltages (200..240V) to suit the 400V "across two phases" feed.
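Two quick numbers behind that paragraph, as a sketch only (the choke and capacitor values are hypothetical ballpark figures, just to show the orders of magnitude):

    import math

    # "Across two phases": with 230 V phase-to-neutral, the line-to-line voltage
    # is sqrt(3) times higher, i.e. the nominal 400 V mentioned above.
    print(f"line voltage: {math.sqrt(3) * 230:.0f} V")        # ~398 V

    # Broken-neutral trouble: one fixture's compensation capacitor ends up in
    # series with another fixture's choke. With ballpark values the resonance
    # lands uncomfortably close to the 50 Hz mains, so the branch impedance
    # drops and parts of the installation get overloaded.
    L, C = 0.28, 25e-6                                        # hypothetical choke (H) / capacitor (F)
    print(f"series resonance: {1 / (2 * math.pi * math.sqrt(L * C)):.0f} Hz")  # ~60 Hz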
As far as I know, the 1kW was sold both in a 140V version (for the 230V feed) and a 240V version (for the 400V feed); the 2kW came only with the 240V arc (so for the 400V feed).
In the US environment all MVs need an (auto)transformer ballast anyway, so there is no benefit from the lower arc voltage, and the more efficient higher voltage version was what was primarily marketed there; I don't know whether the 140V 1kW ever had an ANSI code assigned.