It might be easier to consider the DC case. Imagine a small lithium coin cell with an internal resistance of 10 Ω. A digital voltmeter with very high input impedance, 100 MΩ, approximately an "open circuit", might read 3.000 VDC. Connect a 10 Ω resistor across the cell and measure the voltage again: half the voltage is dropped across the cell's internal resistance, and the other half across the 10 Ω resistor. This is, in effect, a voltage divider, and the voltage measured across the resistor is only 1.500 VDC. A current of 150 mA flows through the circuit, with 225 mW dissipated in the resistor and another 225 mW lost heating the coin cell.
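The divider arithmetic above can be sketched in a few lines of Python; the helper name and values are just the ones from this example, not anything standard:

```python
# Voltage-divider model of the coin cell above: 3.0 V EMF, 10-ohm internal resistance.
V_EMF = 3.0      # open-circuit cell voltage, V
R_INT = 10.0     # internal resistance, ohms

def load_values(r_load):
    """Return (load voltage, current, power in load, power lost in cell)."""
    i = V_EMF / (R_INT + r_load)      # series circuit: one current everywhere
    v_load = i * r_load               # divider: voltage left for the load
    return v_load, i, i * v_load, i * i * R_INT

v, i, p_load, p_cell = load_values(10.0)
print(f"{v:.3f} V, {i*1000:.0f} mA, {p_load*1000:.0f} mW in load, {p_cell*1000:.0f} mW in cell")
# -> 1.500 V, 150 mA, 225 mW in load, 225 mW in cell
```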
With a 1 Ω resistor, most of the power goes to heating the cell, and the voltage across the resistor is far lower than above, so the output power is less than in the example above. With a 1,000 Ω resistor, the output voltage increases and less power is lost in the cell, but the current drops to about 3 mA, a factor of roughly 50 below the matched case, so, again, the output power is less than in the example above.
If you graph output power vs. load resistance, the maximum is at 10 Ω, with 225 mW delivered to the resistor. The same applies to RF or pulse generators: maximum power transfer occurs when the load impedance equals the generator's internal impedance. The maximum voltage appears across an infinite impedance (though for AC, a mismatched load can produce higher standing-wave voltages). The maximum current flows into a short circuit, i.e., 0 Ω, approximated here by the 1 Ω resistor; no voltage can be measured across an ideal short circuit (again, with the caveat that for AC one could sense the current inductively).
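A quick sweep over load resistance, using the same hypothetical 3.0 V / 10 Ω cell, shows the peak landing at the matched load:

```python
# Sweep the load resistance and confirm the power peak at R_load == R_INT.
V_EMF, R_INT = 3.0, 10.0   # same hypothetical coin cell as above

def p_load(r_load):
    i = V_EMF / (R_INT + r_load)
    return i * i * r_load  # power dissipated in the load, W

loads = [1, 5, 10, 20, 100, 1000]
for r in loads:
    print(f"{r:5d} ohm -> {p_load(r)*1000:6.1f} mW")
best = max(loads, key=p_load)
print("peak at", best, "ohm")   # -> peak at 10 ohm
```

Note the symmetry in the printout: 5 Ω and 20 Ω give the same load power (200 mW), as do 1 Ω and 100 Ω (about 74 mW); the curve falls off on both sides of the matched point.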