@Medved, each LED has its own driver/ballast; there are four of them in each fixture. I will be installing these lights today. After testing I found they are much closer to a 400W MH, maybe even less light than that. After running for an hour or so, the back of the fixture gets VERY hot, to the point where you cannot touch it for more than a couple of seconds.
If you CAN touch it for a second or two, it is not hotter than about 70degC. And if you cannot hold it there any longer, that means it is above 50degC. So pretty much as expected...
Kill-a-watt reads 207W for the entire fixture when first started cold; after an hour of operation that drops to 173W. Why is that?
The ballasts are constant current sources, so they maintain the constant 1.5A current whatever the actual voltage does. They are electronically controlled, and that tends to be really constant.
That means the actual power becomes proportional to the voltage: 25V would lead to 37.5W, 36V would lead to 54W.
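Just to make the relationship explicit, a minimal sketch in Python (the 1.5A is the ballast current mentioned above; the rest is plain P = I * V):

```python
# The ballast's regulated output current, per the description above.
DRIVE_CURRENT_A = 1.5

def string_power(voltage_v):
    """Power drawn by one LED string at a given forward voltage (P = I * V)."""
    return DRIVE_CURRENT_A * voltage_v

print(string_power(25.0))  # 37.5 W
print(string_power(36.0))  # 54.0 W
```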
Now the LEDs are semiconductor diodes in the first place, so their electrical behaviour follows the usual diode equations, notably a forward voltage with a negative temperature coefficient. With silicon this tends to be about 2mV/K; with blue LEDs it is about 5mV/K (if I remember correctly). There are 10 LEDs in series, so that means let's say 50mV/K per string.
Now we have calculated the operating temperature to be around 65degC, which means 40degC above room temperature on the LED base. I would expect about half of that again internally, so a total rise of about 60degC at the junction.
So as you connect the power, the warm-up will drop the voltage by 60degC * 50mV/degC = 3V. So a string that started at 33V (so 50W per string, about 200W for the whole fixture) ends up at about 30V, so 45W per string and about 180W total.
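To make the arithmetic easy to check, here is the same calculation as a small Python sketch (the tempco, string voltage and temperature rise are the estimates from above, not measurements):

```python
# All figures are the estimates from the text, not measured values.
TEMPCO_V_PER_K = 0.005   # ~5 mV/K forward voltage drop per blue LED
LEDS_IN_SERIES = 10
TEMP_RISE_K = 60.0       # estimated junction temperature rise
CURRENT_A = 1.5
STRINGS = 4              # four driver/LED sets per fixture
V_COLD = 33.0            # assumed cold string voltage

v_drop = TEMPCO_V_PER_K * LEDS_IN_SERIES * TEMP_RISE_K   # 3.0 V
p_cold = CURRENT_A * V_COLD * STRINGS                    # 198 W cold
p_hot = CURRENT_A * (V_COLD - v_drop) * STRINGS          # 180 W warm
print(v_drop, p_cold, p_hot)
```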
It does explain at least part of the difference.
The other part may come from the ballast:
To prevent the device from overheating, many LED ballast controllers contain a feature called "thermal fold back". What it does: it senses the temperature, and as it approaches a set limit, it starts to reduce the ballast power and so the generated heat.
The aim is to provide reliable thermal protection while avoiding complete cut-offs (as the classic SMPS thermal shutdown does), so the device keeps operating.
Now as the ballast is operated above its ambient temperature rating, it may well be exceeding the 70degC surface temperature (the actual limiting factor would be about 130degC on the ballast IC chip itself), so the power reduction kicks in.
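As an illustration only (the real controller's thresholds and slope are unknown to me), a fold back characteristic looks roughly like this:

```python
def foldback_current(temp_c, full_current_a=1.5,
                     fold_start_c=110.0, limit_c=130.0):
    """Hypothetical thermal fold back: full current below the fold
    threshold, then a linear ramp toward zero at the chip limit.
    The 110/130 degC thresholds are illustrative guesses, not datasheet values."""
    if temp_c <= fold_start_c:
        return full_current_a
    if temp_c >= limit_c:
        return 0.0
    return full_current_a * (limit_c - temp_c) / (limit_c - fold_start_c)

print(foldback_current(100.0))  # 1.5 A, below the threshold
print(foldback_current(120.0))  # 0.75 A, half way into the fold back
```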
Or there may be another reason: the ballast design is made cheap, so instead of a proper reference (as part of a dedicated LED ballast control IC; they may use some generic DC-DC converter chip with a simple current control feedback circuit), they use just a Vbe threshold voltage to regulate the current (a simple NPN sensing the voltage drop across the current sense shunt resistor). And that has the habit of a negative thermal coefficient as well (the famous -2.1mV/K), which translates into the power decreasing as the temperature rises (about 0.37%/degC, which means about 5/6 of the current once the ballast temperature rises by 50degC; a pretty good match to your observation). Actually I would see nothing wrong with that approach either, just the light output would vary with temperature a bit more than with a true constant current and thermal fold back.
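A back-of-envelope check of that figure, assuming a ~0.57V Vbe threshold in the cold state and the classic -2.1mV/K coefficient:

```python
VBE_COLD = 0.57       # assumed NPN Vbe threshold in the cold state, volts
VBE_TEMPCO = -0.0021  # V/K, the classic silicon Vbe coefficient

def current_fraction(temp_rise_k):
    """Regulated current relative to the cold value when the setpoint
    is simply Vbe across a fixed shunt resistor."""
    return (VBE_COLD + VBE_TEMPCO * temp_rise_k) / VBE_COLD

print(current_fraction(50.0))  # ~0.82, roughly 5/6 of the cold current
```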
For the heat transfer compound: whatever the colour, the modern ones tend to act as glue, so once you snap the thing off you have no choice but to clean the surfaces and apply new compound.
Silicone grease alone is not good for this; it has quite high thermal resistance (it works only between really flat surfaces in very thin layers, but here the heatsink surface is quite rough). Thermally best are the metal-filled compounds (usually sold in computer shops), but those are electrically conductive. Second best are the zinc oxide based ones, again sometimes sold as accessories for CPU heatsinks and the like.