For example:

    if( 5.0f > 1 ) { }

Does it have a significant performance penalty compared to just doing

    if( 5.0f > 1.0f ) { }

In the examples you provided there is most probably no difference, since all the necessary conversions will be performed at compile time.
The following will incur a slight performance penalty:
    if( floatVariable > intVariable )
The penalty will be on the order of an additional clock cycle. (Maybe two, or three at most? That is still the same order of magnitude.)
Is that significant? It depends on what kind of work you are doing. If you are writing some extremely computationally intensive software, like weather simulation models for meteorology, then it will be significant. Barely. If you are doing anything less intensive than that, it is not worth wasting any time thinking about it.
Actually, suppose you fill your software with conversions between float and int, and the software is a run-of-the-mill application that gets published and installed on a couple of thousand computers, where people use it every day and keep it installed for an average of one year. Even then, the time you have already spent writing this question and reading this answer is probably more than the total time that all of those users together will lose to the conversions over the entire year.