I'm answering your revised posting of June 19. First, you don't really grok how LEDs work (but don't worry, that's quite common).
Consider what you already know about electricity. Imagine you need to charge five smart-phones. 
Or if we're designing a regulating circuit for five smart-phones:

No need for an explanation here. They are constant-voltage devices which we power with a constant-voltage power supply: voltage remains constant in a parallel connection. (A series connection would be laughably, stupidly, "not even wrong".) Not that we ever really think about that; this is just normal.
And we want to think LED emitters are like that too. But they're not.
Consider a fluorescent tube which runs at ~120 V at ~300 mA. Like any arc-discharge light, once the arc strikes, its resistance is nearly zero. The bulb wants to be a dead short, which is why you need a ballast to limit the current. The ballast is effectively a constant-current supply, which is not something we're used to.
LED emitters are very non-linear - a tiny change in voltage, temperature or age causes a huge change in current. To drive them constant-voltage, you'd need to choose such a conservative voltage that you wouldn't get much performance out of the LED. A well-chosen resistor is better, but you still need a big safety margin which holds you back from peak performance. On the other hand, the LED is rated by the factory to be driven at a specific current.
You say you have an LED rated 350 mA at 3.6 VDC. You interpreted that as about 350 mA at exactly 3.6 V. Nope, it's the other way around: that unit is specced for exactly 350 mA at about 3.6 V. If you tried driving it at a fixed 3.6 V, you might get 50 mA, 150 mA, 350 mA... or magic smoke. And the current would change significantly as the LED warms up.
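To see how touchy that is, here's a rough exponential diode model in Python. The model shape (Shockley-style exponential) is standard, but the `n_vt` slope value is an illustrative assumption, not a number from any datasheet:

```python
import math

def led_current(v, v_ref=3.6, i_ref=0.350, n_vt=0.05):
    """Rough exponential I-V model for an LED, illustrative only.
    (v_ref, i_ref) is the datasheet operating point (3.6 V, 350 mA);
    n_vt is an assumed slope factor -- real emitters vary with
    temperature, age, and manufacturing lot."""
    return i_ref * math.exp((v - v_ref) / n_vt)

for v in (3.5, 3.6, 3.7):
    print(f"{v:.1f} V -> {led_current(v) * 1000:.0f} mA")
```

A 0.1 V wobble either way swings the current from tens of milliamps to well past the rating, which is exactly why you regulate current, not voltage.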
Constant current turns our way of thinking upside down. When regulating current instead of voltage, you want series instead of parallel.

Or if we're designing a regulating circuit for five LEDs:

They are constant-current devices which we power with a constant-current power supply: current remains constant in a series connection. (A parallel connection would be laughably, stupidly, "not even wrong"... well, not quite as wrong as the earlier example, but you'd really be praying to the manufacturing-tolerance gods.)
If you want to dim the LED, you can change the current. The voltage won't change much, so reducing the current by half reduces the brightness by about half. Since they're in series, they all dim together, and equally. And since we can change the current, it might be more apt to call this a "current mode" arrangement, since we care about current and not about voltage.
As a practical thing, constant-current regulator modules are readily available. Lots of people make them. And 350 mA is a commonly used current. We do care about voltage a little: a driver intended for ~3.6 V LEDs may be a different product than one intended for a ~36 V string of ten in series.
Or if you need really simple, stupid current regulation, indeed, a resistor will suffice. In fact, you only need one resistor. But as discussed, you won't get peak performance out of the LEDs that way.
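The single-resistor arithmetic is simple enough to sketch. All the numbers below are example values I've picked, not anything from your posting:

```python
# Sizing one series resistor for a string of LEDs.
# Example values only -- plug in your own supply and LED specs.
v_supply = 12.0     # volts available
v_f = 3.2           # assumed nominal forward voltage per LED
n_leds = 3          # LEDs in series
i_target = 0.150    # amps; run well below the 350 mA rating for margin

r = (v_supply - n_leds * v_f) / i_target   # ohms across the resistor
p = i_target ** 2 * r                      # resistor dissipation, watts
print(f"R = {r:.0f} ohms, dissipating {p:.2f} W")
```

Note the deliberate derating to 150 mA: the resistor only regulates well if Vf lands where you guessed, so you leave a safety margin and give up peak brightness, exactly as discussed above.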
If multiple LEDs stack to more voltage than you have readily available, one option is to break them up into series-parallel strings, with current regulation on each string. That's what happens in 12 volt "LED strips". The other is to use a boost converter to pump up your voltage. Since constant-current power supplies usually incorporate chokes and choppers, the "boost" and "current limiting" functions can be combined - the Joule Thief is a very simple example of that.
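The series-parallel split can be sketched with integer arithmetic. The LED count and forward voltage here are assumptions for illustration, but the layout logic is the same one a 12 V strip uses:

```python
# Splitting LEDs across a 12 V supply as series-parallel strings,
# the typical "LED strip" layout. Example numbers only.
v_supply = 12.0
v_f = 3.0                  # assumed forward voltage per LED
leds_total = 15

per_string = int(v_supply // v_f)   # max LEDs that fit in series
per_string -= 1                     # leave headroom for the
                                    # current-setting resistor
strings = leds_total // per_string  # parallel strings needed
print(per_string, "LEDs per string,", strings, "strings in parallel")
```

Each string gets its own resistor (or regulator), so no string depends on its neighbors sharing current fairly.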