I'm an avid DIY'er recycling some LEDs from a TV. The backlight uses 50 LEDs in series, and the PSU says 160 V @ 400 mA, so each LED drops about 3.2 V (160 V / 50) at 400 mA. My target is therefore roughly 3.2 V @ 400 mA per LED in my own circuit. I would like to drive them with my 12 V constant-voltage power supply, which is adjustable from 10 V up to 14 V.
I understand how to calculate resistor size for a given series of LEDs. Tons of articles explain that. I also understand that LEDs have an aggressive I-V curve, so a series resistor acting as a current limiter is highly recommended (required depending on who you ask). What I don't understand is what happens to the circuit upon voltage fluctuations.
Take a couple of sample circuit designs for my case:
- I could wire 3 LEDs per series, provide 12.08 volts, with a 6.2 ohm resistor. That's only 79% efficient, requires a 2 watt resistor, and generates a good bit of heat.
- I could wire 4 LEDs per series, provide 13.2 volts, with a 1 ohm resistor. That's 97% efficient, cheaper, easier, and runs cooler.
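For reference, here's my working for those two options. It's just Ohm's law over the resistor, assuming a forward voltage of 3.2 V per LED (my estimate from 160 V / 50 above, not a datasheet value):

```python
# Resistor sizing for a series LED string.
# VF is an ASSUMED forward voltage per LED (160 V / 50 LEDs from the TV PSU).
VF = 3.2   # volts per LED (assumption)
I = 0.4    # target string current, amps

for n_leds, v_supply in [(3, 12.08), (4, 13.2)]:
    v_led = n_leds * VF        # total LED drop
    v_res = v_supply - v_led   # voltage the resistor must absorb
    r = v_res / I              # Ohm's law: required resistance
    p_res = v_res * I          # power dissipated in the resistor
    eff = v_led / v_supply     # fraction of supply power reaching the LEDs
    print(f"{n_leds} LEDs: R = {r:.2f} ohm, P = {p_res:.2f} W, eff = {eff:.0%}")
```

That reproduces the 6.2 ohm / ~1 W dissipation (hence a 2 W rated part) and the 1 ohm / 0.16 W figures above.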
A couple of pre-emptive clarifications...
I don't know the specifications of the LEDs in question. They are from a Sharp LC-55Q7030U, and I can't find details on what the manufacturer used.
And I could use the power supply from the TV, which appears to be a much higher quality current-limited source, but then I have to figure out how to trigger power-on, control its PWM circuit, and then figure out how to do the same for the next 2 different TVs I want to control with the same switch & dimmer. Doing some soldering feels simpler to my under-educated brain.
Question:
How do I calculate the effect of supply voltage fluctuations on each of those (or any of the countless alternative) circuit designs? How much protection does the nice and efficient 1 ohm resistor actually provide? Would that really be any better than just under-supplying 12.4 V to 4 LEDs with no resistor at all (which still provides ample brightness)?
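Here's my rough attempt at the calculation, modeling each LED as a fixed forward drop plus a small dynamic resistance r_d. The r_d value below is a guess on my part (I have no datasheet), which is exactly why I'm asking:

```python
# Estimate the current change per volt of supply fluctuation.
# Model: delta_I = delta_V / (R_series + n_leds * r_d)
# R_D is a GUESSED dynamic resistance per LED, not a measured value.
R_D = 0.5  # ohms per LED (assumption)

def delta_i(delta_v, r_series, n_leds, r_d=R_D):
    """Approximate current change (A) for a supply change delta_v (V)."""
    return delta_v / (r_series + n_leds * r_d)

# Compare my two designs for a 0.5 V supply rise:
print(delta_i(0.5, 6.2, 3))  # 3 LEDs + 6.2 ohm: ~65 mA increase
print(delta_i(0.5, 1.0, 4))  # 4 LEDs + 1 ohm:   ~167 mA increase
```

If that model is even roughly right, the 1 ohm design is far more sensitive to supply drift, but I don't know whether this linearized approach is valid for real LEDs near their rated current.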