When the emission layer gets consumed, the cathode work function increases, so ions need to hit it faster to eject the electrons needed for the discharge, hence a larger voltage drop gets spent accelerating them (the cathode drop). The first consequence is that the energy spent accelerating these ions does not radiate, so it means just extra losses (that is the main reason hot cathodes are used: they emit electrons freely without needing any ion acceleration, so with minimum voltage drop and minimum power not generating the desired radiation).
And here is the second consequence: the energy of the fast ions has to go somewhere, and that "somewhere" is heat, heating up the cathode.
The low cathode losses of a strongly emitting hot cathode are the main reason it is employed in the first place: to eliminate the power wasted at the cathode.
Normally a balance gets established where the heated cathode emits so many electrons that it reduces the cathode drop and with it the power generating the heat for the cathode. If the cathode gets colder, the cathode drop increases, increasing the power that is heating it, so the temperature recovers. But all this is designed to work at reasonable temperatures only while the emission material is present on the tungsten filament: it draws all of the emission current onto itself, keeping all the other parts (e.g. the lead wires) reasonably cool.

Once the emission layer gets consumed, the ions first heat the filament itself to a much higher temperature to get the emission from it. The filament is the first "victim", because it is thin, so it is the easiest to bring up to a temperature where it starts emitting electrons. But for bare tungsten that means such high temperatures that it evaporates rather quickly. Then the lead wires take over; they do not need as high a temperature, but they cannot withstand even that lower temperature and melt, moving the arc root (the point where most of the emission for the arc happens) practically into the glass. And once there, the heat is so intense the glass has no chance to withstand it and it cracks.
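To put rough numbers on that power balance, here is a minimal sketch. All values (arc current, cathode fall voltages) are assumed, order-of-magnitude figures for illustration only, not taken from any datasheet:

```python
# Rough illustration of the cathode power balance described above.
# All numbers are assumed, order-of-magnitude values.

def cathode_heating_power(cathode_fall_v: float, arc_current_a: float) -> float:
    """Power dumped into the cathode region by ion bombardment (watts)."""
    return cathode_fall_v * arc_current_a

arc_current = 0.17  # A, assumed current for a small fluorescent tube

# Healthy oxide-coated hot cathode: low cathode fall (assumed ~12 V)
healthy = cathode_heating_power(12, arc_current)

# Emission layer gone, bare tungsten / lead wire as arc root:
# the fall has to rise a lot to keep the discharge going (assumed ~150 V)
worn_out = cathode_heating_power(150, arc_current)

print(f"healthy cathode:  {healthy:.1f} W heating the electrode")
print(f"worn-out cathode: {worn_out:.1f} W heating the electrode")
# The extra tens of watts concentrated on a thin wire is what melts it.
```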
The only way to prevent the glass from cracking is to shut the lamp down once the emission layer gets eaten up, so it needs a well working and very sensitive EOL protection in the ballast. And that is kind of a problem: if you make it that sensitive, it will start to trip prematurely just due to fluctuations of the lamp characteristics (sometimes the emission layer needs a bit of an extra heat "push" to recover its emission, and this may get misinterpreted by the EOL detection as the EOL point). Because after emission loss the lamp is practically unusable anyway (any significant current will lead to things melting), the EOL detections are designed only to keep the thing safe (so the lamp won't shatter and "rain" the mess down) and, if the ballast is supposed to outlast many lamps, to protect the ballast and fixture sockets (so as not to overheat that area and damage them). In CFLs the ballast is a "throw away" once the lamp goes, so practically just the safety aspect remains. And that in most cases relies on the lamp electrodes destroying themselves so the thing shuts down before it disintegrates.
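To show the sensitivity trade-off, here is a toy sketch of a debounced EOL trip decision. It assumes the ballast measures some per-cycle "EOL indicator" (for example a DC offset of the lamp voltage, which many electronic ballasts watch, though not necessarily this way); the threshold and cycle count are hypothetical:

```python
# Sketch of a debounced EOL trip decision. Threshold and persistence count
# are assumed values, chosen only to illustrate the trade-off described above.

EOL_THRESHOLD = 15.0      # volts of DC offset considered "suspicious" (assumed)
TRIP_AFTER_CYCLES = 200   # how long the condition must persist before shutdown (assumed)

def should_shut_down(dc_offset_per_cycle):
    """Return True once the indicator stays above threshold long enough.

    Tripping on a single cycle would also catch harmless emission "hiccups"
    where the cathode just needs a moment of extra heating to recover,
    hence the persistence counter.
    """
    over = 0
    for offset in dc_offset_per_cycle:
        if abs(offset) > EOL_THRESHOLD:
            over += 1
            if over >= TRIP_AFTER_CYCLES:
                return True
        else:
            over = 0   # condition cleared, the cathode recovered
    return False
```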
But none of that means the lamp cannot let air in.
The thing is, once the discharge transfers onto the lead wire, the cathode drop on it is not that high even though it means the wires get melted at the arc root, so it is quite hard to detect without being too prone to false trips. Even filament continuity detection is not reliable, as the ionized gas can form a strong enough connection between the lead-in wires that the detection circuitry still "thinks" the filament is intact, so it does not shut down the lamp.
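A toy illustration of why a plain continuity check gets fooled, under the assumption that the check just measures resistance across the electrode pins; all resistance values and the threshold are assumed:

```python
# Why a simple filament-continuity check can be fooled (assumed values).
# With the discharge running, the ionized gas between the two lead-in wires
# of one electrode conducts too; the checker sees that path, not the filament.

def measured_resistance(filament_ohms: float, plasma_path_ohms: float) -> float:
    """Resistance seen across the electrode pins (filament in parallel with plasma)."""
    if filament_ohms == float("inf"):
        return plasma_path_ohms
    return 1.0 / (1.0 / filament_ohms + 1.0 / plasma_path_ohms)

intact = measured_resistance(10.0, 500.0)           # assumed hot-filament value
broken = measured_resistance(float("inf"), 500.0)   # plasma path alone (assumed)

OPEN_CIRCUIT_THRESHOLD = 1000.0  # ohms; hypothetical "filament missing" limit
print(intact < OPEN_CIRCUIT_THRESHOLD, broken < OPEN_CIRCUIT_THRESHOLD)
# Both pass: a few hundred ohms of plasma can still read as "filament present".
```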
So the only way to operate a lamp with a consumed emission layer on one electrode is when the other one still has some left, so it can be used as the working cathode in DC operation. Once the emission layer is consumed on both sides, you are limited to just a few mA (a small CCFL driver or NST), in order to prevent further destruction of the lead wires.