
The program hardinfo is included by default with Lubuntu as a system profiler. It can run six different benchmarks:

- CPU Blowfish
- CPU Cryptohash
- CPU Fibonacci
- CPU N-Queens
- FPU FFT
- FPU Raytracing

I recognize most of these as mathematical problems that require computation to solve, but could anyone explain how each individual test relates to the processor's ability to run tasks? That is, if one test is much faster on one machine than on another, while a different benchmark shows only a small improvement, what does that tell me about the hardware in question?

  • Here you will find all the details related to HardInfo. Commented Nov 19, 2019 at 12:23
  • There are several ways to answer this question: one is to directly examine the source code of HardInfo; another is a review-style answer that also covers the benchmarks added in later versions of HardInfo. Unlike many benchmarking programs, HardInfo is open source, and some useful comments may be found within the source code (inline documentation). Commented Nov 22, 2019 at 4:10

3 Answers


Blowfish is a symmetric-key block cipher with a 64-bit block size.

CryptoHash is a cryptographic hash function that maps data of arbitrary size (often called the "message") to a bit array of a fixed size (called the "hash" or "message digest"). It is a one-way function that is practically infeasible to invert, and is used in digital signatures, message authentication codes, and for indexing data in hash tables.
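To make the "MiB/second" framing concrete, here is a minimal sketch of a CryptoHash-style measurement using Python's hashlib. The buffer size, hash algorithms, and round count are arbitrary illustrative choices, not HardInfo's actual parameters.

```python
# Hash a fixed buffer repeatedly and report throughput in MiB/s.
import hashlib
import time

def hash_throughput(data: bytes, rounds: int = 50) -> float:
    """Return combined MD5 + SHA-256 hashing throughput in MiB/s."""
    start = time.perf_counter()
    for _ in range(rounds):
        hashlib.md5(data).digest()
        hashlib.sha256(data).digest()
    elapsed = time.perf_counter() - start
    mib_hashed = 2 * rounds * len(data) / (1024 * 1024)
    return mib_hashed / elapsed

if __name__ == "__main__":
    buf = bytes(1 << 20)  # 1 MiB of zeros
    print(f"{hash_throughput(buf):.1f} MiB/s")
```

A faster machine hashes more data per second, so for this style of benchmark higher numbers are better.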

A Fibonacci sequence is a series of numbers in which each number is the sum of the two preceding numbers, such as 1, 1, 2, 3, 5, 8, etc. This benchmark tests the integer processing ability of a CPU.
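A sketch of what such a benchmark does: time a deliberately naive recursive computation, which is dominated by integer arithmetic and function-call overhead. The recursion depth of 25 is an arbitrary choice here, not HardInfo's.

```python
import time

def fib(n: int) -> int:
    # Intentionally naive: exponential number of calls, all integer work.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

start = time.perf_counter()
value = fib(25)  # fib(25) == 75025
print(value, f"computed in {time.perf_counter() - start:.3f}s")
```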

N-Queens finds a way to place a variable number of queens on a chessboard so that no two queens threaten each other by sharing the same row, column or diagonal. For some reason, the Cortex-A53 which is a simpler in-order processor does better at this benchmark than more complex out-of-order processors like the Cortex-A72 and Core i5, but it is hard to conclude much from this result.
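A backtracking solution counter gives a feel for the workload: it is branchy, call-heavy integer code with little arithmetic, which may be why simpler in-order cores fare comparatively well. This is a generic implementation, not HardInfo's.

```python
def n_queens(n: int) -> int:
    """Count placements of n non-attacking queens via backtracking."""
    def place(row, cols, d1, d2):
        if row == n:
            return 1  # all queens placed
        total = 0
        for col in range(n):
            # Safe if no earlier queen shares this column or either diagonal.
            if col not in cols and (row - col) not in d1 and (row + col) not in d2:
                total += place(row + 1, cols | {col},
                               d1 | {row - col}, d2 | {row + col})
        return total
    return place(0, frozenset(), frozenset(), frozenset())

print(n_queens(8))  # 92 distinct solutions on a standard 8x8 board
```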

ZLib is a software library used for data compression, which is used by the gzip file compression program. This benchmark is memory intensive, so its results will reflect the speed of the RAM.
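A sketch of a ZLib-style round trip using Python's binding to the same library; buffer contents, compression level, and round count are illustrative assumptions.

```python
import time
import zlib

def zlib_roundtrip_time(data: bytes, rounds: int = 10) -> float:
    """Time repeated compress/decompress cycles over the same buffer."""
    start = time.perf_counter()
    for _ in range(rounds):
        packed = zlib.compress(data, 6)        # default-ish compression level
        assert zlib.decompress(packed) == data  # verify the round trip
    return time.perf_counter() - start

if __name__ == "__main__":
    buf = b"the quick brown fox jumps over the lazy dog " * 20000
    print(f"{zlib_roundtrip_time(buf):.3f}s")
```

Because the working set streams through memory, results depend on cache and RAM bandwidth as well as raw CPU speed.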

The Fast Fourier Transform (FFT) converts a signal between the time domain and the frequency domain. It is used in audio and image digital signal processing, and gives an indication of how fast a processor can process video in software if it lacks hardware video encoding.
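The floating-point multiply/add pattern an FFT benchmark stresses can be seen in a pure-Python radix-2 Cooley-Tukey implementation (input length must be a power of two; HardInfo's actual FFT code will differ).

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT of a power-of-two-length sequence."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    half = n // 2
    # Twiddle factors: complex multiplies dominate the floating-point work.
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(half)]
    return ([even[k] + twiddled[k] for k in range(half)] +
            [even[k] - twiddled[k] for k in range(half)])

print(fft([1, 1, 1, 1]))  # a constant signal puts all its energy in bin 0
```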

Ray tracing is a rendering technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. Like FFT, this benchmark tests how well the processor deals with floating point numbers (i.e. numbers with decimal points).
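Not a full ray tracer, but a sketch of the float-heavy inner loop such a benchmark exercises: intersecting a fan of rays with a unit sphere via the quadratic formula. The scene setup and sample count are arbitrary assumptions, unrelated to HardInfo's renderer.

```python
import math

def ray_sphere_hits(samples: int = 100_000) -> int:
    """Count how many rays from a fan hit a unit sphere at the origin."""
    hits = 0
    oz = -3.0  # camera at (0, 0, -3); sphere of radius 1 at the origin
    for i in range(samples):
        dx = 2.0 * i / samples - 1.0  # sweep direction across the image plane
        dz = 1.0
        inv_len = 1.0 / math.sqrt(dx * dx + dz * dz)  # normalize the ray
        dx *= inv_len
        dz *= inv_len
        # Solve |o + t*d|^2 = 1, i.e. t^2 + 2(o.d)t + (|o|^2 - 1) = 0
        b = 2.0 * oz * dz
        c = oz * oz - 1.0
        if b * b - 4.0 * c >= 0.0:  # non-negative discriminant: a hit
            hits += 1
    return hits

print(ray_sphere_hits())
```

Every iteration is square roots, divides, multiplies, and compares on floats, which is why this kind of benchmark tracks FPU performance.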

See: https://source.puri.sm/Librem5/community-wiki/-/wikis/Benchmarks


I doubt a canonical answer to this question exists, but I'll have my say anyway.

After an interesting hour or so of reading, the conclusion has to be that these specific benchmarks don't do much beyond telling you how fast the target machine's CPU and FPU can execute these specific routines.

Just take a look around; two interesting resources I found are the wiki and a manufacturer's page. A moment's thought over these two shows pretty clearly that the manufacturer has chosen specific benchmarks (some the same as HardInfo's and some different) that show their product in its best light. They even tell you what they intend to measure by them: CPU and FPU speed, no more, no less. Don't lose sleep looking for more than that.

So there is part of the answer. The CPU benchmarks measure processor speed in slightly different ways; the only significance is that different processors will perform differently in the various tests. Ditto the FPU ones. This will be true of any benchmark routine that tests a system more complex than a few components.

Better or worse? General guidance is found over at tombuntu.

The thing I am carrying away from this bout of reading is from the wiki:

Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They also have been known to mis-represent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing.

So what you have in HardInfo is just a set of standardised tests that haven't been specially selected by a manufacturer. And as for "day-to-day" computing, again from the wiki:

Ideally benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite.

Conclusion? In general terms, better benchmark results mean faster processing, but they are no guarantee that a system performing well under benchmarking won't be outperformed in real life by a 'lesser' system better suited to its workload.

Reading through the various articles on HardInfo, it also seems the real intent is to monitor your own system's performance and degradation over time. That is the only circumstance under which the benchmarks can be reliably interpreted: when run repeatedly on the same hardware.


I arrived here wondering why my Ryzen 5950X was sometimes showing at the top of the benchmarks (the ones with various generic hardware used as comparison within hardinfo) and sometimes at the bottom, and if by any chance some benchmarks are by time (lower is better) rather than speed (higher is better).

The link from @champion-runner indicates that, indeed, some metrics go in one direction and others in the other (and, frankly, I find that quite annoying). However, I believe that the information on the linked page is stale.

Looking into the source code (for which the trickiest part was finding the right version), I see that the authors actually added some highlight notes/comments, but I cannot see them in the GUI (a bug?). In any case, here are the comments for each benchmark:

better is...  benchmark        comment                                      notes
lower         CPU Blowfish     "Results in seconds. Lower is better."
higher        CPU CryptoHash   "Results in MiB/second. Higher is better."
lower         CPU Fibonacci    "Results in seconds. Lower is better."
lower         CPU N-Queens     "Results in seconds. Lower is better."
???           CPU ZLib         "Results in HIMarks. Higher is better."      Contradicts my observations
lower         FPU FFT          "Results in seconds. Lower is better."
higher        GPU Drawing      "Results in HIMarks. Higher is better."
lower         GPU Raytracing   "Results in seconds. Lower is better."
The ordering for the "CPU ZLib" benchmark seems wrong: according to the code comment (and the source code: zlib.c#L64), it should be higher == better. Instead, when I load the system with other tasks to make the benchmark worse, the result is higher (e.g. from 5.4 to 6.0), and what is reported seems to be elapsed time in seconds. Despite the sleuthing below, is it possible that I have the wrong source version after all, or that the Debian/Ubuntu maintainers have done some patching that I missed?

References

(Bit of detective work - perhaps there is a faster way to get the proper repo?)
