Thanks for the detailed explanation, Medved.
Just one point I don't fully understand: why are the American lamps not manufactured with the same tight control on arc tube voltage as the EU versions? From your explanation of the different circuits, I would have thought it even more critical to keep the arc voltage within tight limits on CWA gear.
There is a difference between the arc voltage tolerance and its dependence on the input power variation.
The previous post described why the US lamps have to keep their arc voltage ideally independent of the actual power, to prevent thermal instabilities; but that does not mean the arc voltage cannot vary for other reasons.
To maintain the arc in the AC circuit, you need two conditions to be met:
First, the OCV, measured as the first harmonic, has to be higher than the first harmonic of the arc voltage, so that current flows and delivers power into the arc.
Second, because the current is AC, it has to change polarity periodically. That means the current first decays to zero and is then re-established in the other direction. But at mains frequency there is a problem: the arc essentially disappears when the current drops to zero, so after each polarity change you essentially have to reignite the arc before the current can flow again.
For the reignition you need hot electrodes (quite easy, they are already warmed up and have thermal inertia) and some extra voltage to "repopulate" the arc with ions.
Now this voltage has to be above the arc voltage. The arc voltage is in fact an equilibrium point, where new ions are generated at the same rate as they decay, so a voltage merely equal to it means the ion population will not grow; the extra margin for the "ion breeding" must always be there. So the higher the arc voltage, the higher the voltage that must be available in the ballast for the reignition.
Now with the series choke, the only voltage you have is the mains voltage. Given the phase angle conditions, this leads to the generic limitation that the maximum usable arc voltage is about half of the mains RMS (higher power arcs are more stable, so their arc voltage could be a bit higher, but still not by much). For European mains this means at most ~110 V arc voltage at all times over the lamp's life. As the HPS arc voltage rises over its life, the initial arc voltage should be lower; a higher initial voltage would mean the lamp reaches the 110 V EOL limit sooner, so its life would be shorter.
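To make the numbers concrete, here is a small back-of-the-envelope sketch of that rule of thumb. The 230 V mains figure is the standard European value; the initial arc voltage and the drift rate are purely illustrative placeholders, not datasheet values.

```python
# Rule of thumb from the post: on a plain series choke the maximum
# usable arc voltage is about half the mains RMS voltage.
MAINS_RMS_V = 230.0
max_arc_v = MAINS_RMS_V / 2          # ~115 V, commonly rounded to ~110 V

# HPS arc voltage rises over life, so the initial voltage sets the
# headroom (and thus the life) before the EOL limit is reached.
initial_arc_v = 100.0                # assumed starting point, illustrative
rise_per_khr = 2.0                   # V per 1000 h, hypothetical drift rate

headroom_v = max_arc_v - initial_arc_v
life_khr = headroom_v / rise_per_khr
print(f"headroom: {headroom_v:.0f} V, life to limit: ~{life_khr:.1f} khr")
```

With these made-up numbers, a lamp starting 15 V below the limit and drifting 2 V per 1000 h would hit the limit after roughly 7500 h, which is why makers push the initial voltage down to stretch the life.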
Now making a lamp with too low an arc voltage (to boost its life) is not a winner either: it means lower efficacy and higher current, so a higher ballast load, so when the voltage falls below what the ballasts are designed for, the ballast would overheat. That means lower ballast efficiency as well. At the same time a lower arc voltage means (on practically all wattages, except perhaps the lowest ones) lower efficacy, so the system efficacy would be severely compromised. So the standard sets a minimum value (as a trade-off balance) that all ballasts have to be designed for and that no lamp should cross. And as makers want maximum life from their products, they push the voltage tolerance range as close as possible toward that minimum limit.
The CWA can tolerate way higher arc voltages without the danger of extinguishing the arc, because the series resonance effect (it is in fact a series LC operated below resonance) keeps the capacitor charged at roughly the peak voltage during the current zero cross. So for the lamp reignition there is not only the (transformed) mains voltage, but its sum with the capacitor voltage, about double the voltage available to reignite the arc compared to the series choke.
Practically this means arc reignition is no longer the limit for the arc voltage. The arc voltage can really approach the ballast OCV, so the arc extinguishes only when the ballast is not able to feed any power anymore.
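A rough sketch of why the CWA has so much more reignition headroom, following the post's "capacitor holds about the peak voltage" reasoning; the 230 V figure is just an example, and the factor of two is the approximation from the text, not an exact circuit result.

```python
import math

mains_rms = 230.0
ocv_peak = math.sqrt(2) * mains_rms   # peak available on a plain choke

# CWA: near the current zero cross the series capacitor still holds
# roughly the peak voltage, which adds to the (transformed) OCV peak.
cwa_peak = 2 * ocv_peak               # roughly double, per the post

print(f"choke: ~{ocv_peak:.0f} V peak, CWA: ~{cwa_peak:.0f} V peak")
```

So with roughly twice the peak voltage available to the lamp, reignition stops being the bottleneck and the arc voltage can creep up toward the OCV itself.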
The other limit is not as hard either: as the ballast keeps the current constant over a wide range of conditions, a lower arc voltage does not pose such a risk of ballast overheating, so lamp designs may safely go as low as their designers wish (there is an efficacy and ballast efficiency penalty in doing that; this is how the "energy saver" MH's reach their lower power input).
As the limits are by far not that strict, the makers are not bound to as tight a voltage tolerance as they are for the European market.
Of course, to keep some margin for the ballast to compensate for mains variation, the target arc voltage is about half of the OCV, but it is by far not as critical as with the series choke alone.
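That "about half the OCV" target can be written down as a one-liner; the OCV value used below is a hypothetical example, not a real CWA ballast spec.

```python
def target_arc_voltage(ocv_rms: float) -> float:
    """Rule of thumb from the post: aim the arc voltage at OCV / 2,
    leaving headroom for the ballast to absorb mains variation."""
    return ocv_rms / 2

print(target_arc_voltage(220.0))   # prints 110.0
```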
So if I could sum up:
As you cannot have all the sweets, with lamp design you have to make compromises. And one of the compromises is the achievable lamp-to-lamp tolerance range vs. thermal stability.
For the EU market you need to tighten the lamp-to-lamp tolerances, but you don't care as much about thermal stability, so you may use the saturated vapor concept to its fullest extent (the pressure is a function of temperature and nothing else, so the effects of dimension and dose variations are suppressed); the series choke keeps it stable anyway.
For the US market you need lamps that help to thermally stabilize the system, but you don't care as much about the lamp-to-lamp tolerance, so e.g. an unsaturated vapor concept works well; even though it is very sensitive to dosing and aging, the CWA covers that spread.