I doubt a canonical answer to this question exists, but here's my take.
After an interesting hour or so of reading, my conclusion is that these benchmarks do little more than tell you how fast the target machine's CPU and FPU can execute these specific routines.
Just take a look around; two interesting resources I found are the wiki and a manufacturer. A moment's thought over these two makes it pretty clear that the manufacturer has chosen specific benchmarks (some the same as HardInfo's, some different) that show their product in its best light. They even tell you what they intend to measure by them: CPU and FPU speed, no more, no less. Don't lose sleep looking for more than that.
So there is part of the answer. The CPU benchmarks measure processor speed in slightly different ways; the only significance is that different processors will perform differently across the various tests. Ditto the FPU ones. This will be true of any benchmark routine that tests a system more complex than a few components.
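To make concrete what such routines actually measure, here is a minimal sketch (my own illustration, not HardInfo's actual code) of an integer-heavy loop and a floating-point loop timed separately. The loop bodies and constants are arbitrary choices:

```python
import time

def cpu_benchmark(n=200_000):
    """Integer-heavy loop: exercises the CPU's integer units."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total = (total + i * 7) % 1_000_003
    return time.perf_counter() - start

def fpu_benchmark(n=200_000):
    """Floating-point loop: exercises the FPU instead."""
    start = time.perf_counter()
    x = 1.0
    for i in range(1, n):
        x = (x * 1.000001 + 1.0 / i) % 1000.0
    return time.perf_counter() - start

cpu_t = cpu_benchmark()
fpu_t = fpu_benchmark()
print(f"CPU loop: {cpu_t:.4f}s  FPU loop: {fpu_t:.4f}s")
```

Two machines can easily rank differently on these two loops, which is the whole point: each number describes only that routine on that hardware.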
Better or worse? General guidance can be found over at tombuntu.
The thing I am carrying away from this bout of reading is from the wiki:
Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They also have been known to mis-represent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing.
So what you have in HardInfo is just a set of standardised tests that haven't been specially selected by a manufacturer. And as for "day-to-day" computing, again from the wiki:
Ideally benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.
Conclusion? In general, better benchmark scores mean faster processing, but they are no guarantee that a system that benchmarks well won't be outperformed in real life by a 'lesser' system better suited to the actual workload.
Reading through the various articles on HardInfo, it also seems the real intent is to monitor your system's performance and detect degradation over time. That is the only circumstance in which the benchmarks can be reliably interpreted: when run repeatedly on the same hardware.
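A rough sketch of that use case: record the same benchmark score on the same machine over time and flag a significant slowdown. The log file name and the 80% threshold are my own choices for illustration, not anything HardInfo itself does:

```python
import csv
import os
import time

LOG = "bench_history.csv"  # hypothetical log file for past scores

def record_and_check(score, threshold=0.8, log=LOG):
    """Append today's score to the log; warn if it fell below
    threshold * best-ever score. Higher score = better.
    Returns True if performance still looks healthy."""
    history = []
    if os.path.exists(log):
        with open(log, newline="") as f:
            history = [float(row[1]) for row in csv.reader(f)]
    with open(log, "a", newline="") as f:
        csv.writer(f).writerow([time.strftime("%Y-%m-%d"), score])
    if history and score < threshold * max(history):
        print("Possible degradation: score dropped below 80% of best run")
        return False
    return True
```

Run against the same hardware each time, a trend in these numbers is meaningful in a way that a one-off cross-machine comparison is not.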