However, here the theory ends. LED flicker can be square-wave (vs. the more gradual modulation of discharge lamps) and the duty cycle of the flicker can have much shorter "on" periods (vs. discharge lamps). Flicker at 100/120 Hz is borderline between visible and not. It is exactly this detail (and, for fluorescents, the afterglow of the phosphors) that makes the difference...
…("valley fill" or so) and low voltage Halogen lighting (the thick filaments have high thermal inertia).
The theory does not end there at all. The literature describes pretty accurately how the border frequency (where the flicker drops below a certain perception level) depends on the shape of the light output pulse. And it lists the acceptable flicker level for given applications, including the way the light reaches the eye: peripheral vision is way more flicker sensitive than central vision, so accent light has a higher flicker tolerance than the general background light, a driveway security light has a higher acceptable flicker than indoor lighting, and so on.
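For reference, the two figures usually quoted for pulse shape are percent flicker and flicker index. Here is a minimal sketch (plain Python/NumPy; the waveforms are made-up illustrations, not measurements of any particular lamp) showing that a shorter duty cycle pushes the flicker index up even though percent flicker stays at 100 %:

```python
import numpy as np

def flicker_metrics(light, t):
    """Return (percent flicker, flicker index) over the sampled periods."""
    lavg = np.trapz(light, t) / (t[-1] - t[0])
    percent = 100.0 * (light.max() - light.min()) / (light.max() + light.min())
    above = np.trapz(np.clip(light - lavg, 0.0, None), t)  # area above the mean level
    total = np.trapz(light, t)                              # total area under the curve
    return percent, above / total

f = 100.0                            # 100 Hz ripple (50 Hz mains, full-wave rectified)
t = np.linspace(0.0, 2.0 / f, 4001)  # two full ripple periods

waves = {
    "square, 50 % duty":     (((t * f) % 1.0) < 0.50).astype(float),
    "square, 25 % duty":     (((t * f) % 1.0) < 0.25).astype(float),
    "DC + 15 % sine ripple": 1.0 + 0.15 * np.sin(2 * np.pi * f * t),
}

for name, wave in waves.items():
    pct, idx = flicker_metrics(wave, t)
    print(f"{name:22s}  percent flicker = {pct:5.1f} %   flicker index = {idx:.3f}")
```

Both square waves come out at 100 % percent flicker, but the 25 % duty one has a notably higher flicker index, while the shallow ripple (roughly what phosphor afterglow or partial smoothing gives you) scores low on both.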
The problem is, many LED lamps are designed to the most flicker-tolerant applications (low quality lighting), but offered for rather demanding applications.
This belongs to the general problem of incorrectly specifying the exact light source used.
The technical problem is, you may have a flicker-free lamp within a tight enclosure (the filament concept), but then the ballast life will be rather limited (the heavily loaded electrolytic capacitor will fail first, most likely as a short circuit; although for a high light quality application that may still be acceptable).
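To put a number on the "rather limited ballast life": aluminium electrolytics roughly follow the well-known rule of thumb of halving their rated life for every 10 °C above the datasheet rating temperature. A quick sketch (the 5000 h / 105 °C rating and the enclosure temperatures below are illustrative placeholders, not any specific part):

```python
def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    """Estimated electrolytic capacitor life using the 2x-per-10-degree rule of thumb."""
    return rated_hours * 2.0 ** ((rated_temp_c - actual_temp_c) / 10.0)

rated = 5000.0  # hours at 105 degC, a typical long-life rating (placeholder)
for t_c in (65, 85, 95, 105):
    hours = cap_life_hours(rated, 105.0, t_c)
    print(f"{t_c:3d} degC inside the lamp ->  ~{hours:7.0f} h  (~{hours / 8760:.1f} years continuous)")
```

At the 90 to 100 °C easily reached inside a closed retrofit bulb, that works out to roughly a year of continuous operation, which is why the capacitor, not the LED, sets the lamp's life.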
You may as well design the ballast to really eliminate all life-limiting components (mainly the electrolytics), so the overall life would be limited by the LEDs alone, which is unavoidable. The money saved on those parts can go into using more LED chips and driving them at lower power to prolong their life, but then you have to live with some flicker.
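To see what "living with some flicker" buys you, here is a rough capacitor-sizing sketch using the usual full-wave rectifier approximation C ≈ I / (2 * f_line * dV); the 60 mA string current and 140 V bus are assumptions for illustration, not a real design:

```python
def smoothing_cap_uF(load_current_a, line_freq_hz, ripple_v):
    """Capacitance (uF) to keep peak-to-peak ripple below ripple_v on the DC bus."""
    return load_current_a / (2.0 * line_freq_hz * ripple_v) * 1e6

i_led = 0.060   # 60 mA LED string current (assumed)
v_bus = 140.0   # DC bus voltage across the string (assumed)
for ripple_pct in (30.0, 10.0, 3.0):
    dv = v_bus * ripple_pct / 100.0
    c = smoothing_cap_uF(i_led, 50.0, dv)
    print(f"{ripple_pct:4.0f} % ripple  ->  ~{c:6.1f} uF electrolytic needed")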