
ATM7029B?

tinymembench v0.3.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 260.6 MB/s (0.4%)
C copy : 323.8 MB/s (0.7%)
C copy prefetched (32 bytes step) : 372.8 MB/s
C copy prefetched (64 bytes step) : 374.4 MB/s (0.5%)
C 2-pass copy : 292.0 MB/s (0.3%)
C 2-pass copy prefetched (32 bytes step) : 325.6 MB/s
C 2-pass copy prefetched (64 bytes step) : 325.5 MB/s
C fill : 1445.6 MB/s (1.9%)
---
standard memcpy : 332.5 MB/s (0.5%)
standard memset : 1434.4 MB/s
---
NEON read : 621.2 MB/s
NEON read prefetched (32 bytes step) : 950.8 MB/s
NEON read prefetched (64 bytes step) : 1069.2 MB/s
NEON read 2 data streams : 761.7 MB/s
NEON read 2 data streams prefetched (32 bytes step) : 1253.5 MB/s (0.2%)
NEON read 2 data streams prefetched (64 bytes step) : 1020.1 MB/s
NEON copy : 322.1 MB/s
NEON copy prefetched (32 bytes step) : 392.4 MB/s (1.0%)
NEON copy prefetched (64 bytes step) : 336.3 MB/s (0.1%)
NEON unrolled copy : 329.7 MB/s
NEON unrolled copy prefetched (32 bytes step) : 364.4 MB/s
NEON unrolled copy prefetched (64 bytes step) : 346.2 MB/s
NEON copy backwards : 280.7 MB/s (0.1%)
NEON copy backwards prefetched (32 bytes step) : 380.8 MB/s (0.7%)
NEON copy backwards prefetched (64 bytes step) : 321.4 MB/s
NEON 2-pass copy : 298.4 MB/s
NEON 2-pass copy prefetched (32 bytes step) : 353.6 MB/s
NEON 2-pass copy prefetched (64 bytes step) : 324.5 MB/s
NEON unrolled 2-pass copy : 293.4 MB/s
NEON unrolled 2-pass copy prefetched (32 bytes step) : 333.5 MB/s (0.2%)
NEON unrolled 2-pass copy prefetched (64 bytes step) : 322.4 MB/s (0.2%)
NEON fill : 1435.4 MB/s
NEON fill backwards : 1434.1 MB/s
VFP copy : 327.8 MB/s
VFP 2-pass copy : 297.6 MB/s
ARM fill (STRD) : 1435.5 MB/s
ARM fill (STM with 8 registers) : 1433.0 MB/s
ARM fill (STM with 4 registers) : 1435.2 MB/s
ARM copy prefetched (incr pld) : 378.2 MB/s
ARM copy prefetched (wrap pld) : 335.5 MB/s
ARM 2-pass copy prefetched (incr pld) : 343.4 MB/s
ARM 2-pass copy prefetched (wrap pld) : 311.7 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 11.9 ns / 19.1 ns
65536 : 18.6 ns / 26.1 ns
131072 : 22.2 ns / 28.8 ns
262144 : 24.0 ns / 30.0 ns
524288 : 28.5 ns / 35.8 ns
1048576 : 89.8 ns / 147.0 ns
2097152 : 121.4 ns / 197.2 ns
4194304 : 139.9 ns / 224.1 ns
8388608 : 154.2 ns / 243.1 ns
16777216 : 165.9 ns / 259.2 ns
33554432 : 176.2 ns / 276.5 ns
67108864 : 189.5 ns / 303.5 ns
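The "prefetched" and "2-pass" copy variants measured above are only briefly described in the benchmark's header notes. As a rough illustration only (not tinymembench's actual implementation, which uses hand-tuned C and assembly loops; the 64-byte step, 256-byte prefetch distance and 4 KiB staging buffer are assumed values), the two ideas in plain C look roughly like this:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Prefetched copy: issue a software prefetch a fixed distance ahead of
 * the data being copied, so the source cache lines are already on their
 * way when the copy loop reaches them.  Assumes n is a multiple of 64. */
static void copy_prefetched(uint8_t *dst, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i += 64) {
        __builtin_prefetch(src + i + 256);   /* hint only, harmless past the end */
        memcpy(dst + i, src + i, 64);
    }
}

/* 2-pass copy: stage each chunk in a small temporary buffer that fits in
 * the L1 cache, then write it out (source -> L1 cache, L1 -> destination). */
static void copy_2pass(uint8_t *dst, const uint8_t *src, size_t n)
{
    uint8_t tmp[4096];                       /* assumed staging buffer size */
    for (size_t i = 0; i < n; i += sizeof(tmp)) {
        size_t chunk = (n - i < sizeof(tmp)) ? n - i : sizeof(tmp);
        memcpy(tmp, src + i, chunk);         /* pass 1: pull into L1 */
        memcpy(dst + i, tmp, chunk);         /* pass 2: push to destination */
    }
}
```

In the run above the 2-pass variants trail the direct copies, so the extra staging pass does not pay off on this memory subsystem.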

Kernel 4.9.140-tegra #1 SMP PREEMPT Wed Mar 13 00:32:22 PDT 2019 aarch64 GNU/Linux. Run under Xorg with no compositor active and no browser or other CPU hogs running.

tinymembench v0.4.9 (simple benchmark for memory throughput and latency)

==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and writen ==
== bytes would have provided twice higher numbers) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 2949.7 MB/s (3.8%)
C copy backwards (32 byte blocks) : 3011.8 MB/s
C copy backwards (64 byte blocks) : 3029.2 MB/s
C copy : 3642.2 MB/s (4.1%)
C copy prefetched (32 bytes step) : 3824.4 MB/s (0.3%)
C copy prefetched (64 bytes step) : 3825.3 MB/s (0.4%)
C 2-pass copy : 2726.2 MB/s
C 2-pass copy prefetched (32 bytes step) : 2902.6 MB/s (2.5%)
C 2-pass copy prefetched (64 bytes step) : 2928.3 MB/s (0.3%)
C fill : 8541.0 MB/s (0.2%)
C fill (shuffle within 16 byte blocks) : 8518.5 MB/s (2.1%)
C fill (shuffle within 32 byte blocks) : 8537.1 MB/s (0.1%)
C fill (shuffle within 64 byte blocks) : 8528.7 MB/s (0.2%)
---
standard memcpy : 3558.8 MB/s
standard memset : 8520.2 MB/s
---
NEON LDP/STP copy : 3633.9 MB/s (4.2%)
NEON LDP/STP copy pldl2strm (32 bytes step) : 1451.0 MB/s (0.3%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 1450.9 MB/s (0.5%)
NEON LDP/STP copy pldl1keep (32 bytes step) : 3882.5 MB/s (3.9%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 3884.0 MB/s (0.4%)
NEON LD1/ST1 copy : 3630.8 MB/s (0.3%)
NEON STP fill : 8537.8 MB/s
NEON STNP fill : 8544.9 MB/s (1.7%)
ARM LDP/STP copy : 3635.8 MB/s (0.3%)
ARM STP fill : 8544.8 MB/s (0.1%)
ARM STNP fill : 8549.2 MB/s (1.0%)
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 766.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 688.8 MB/s
NEON LD1/ST1 copy (from framebuffer) : 770.6 MB/s (0.1%)
NEON LD1/ST1 2-pass copy (from framebuffer) : 681.3 MB/s (0.3%)
ARM LDP/STP copy (from framebuffer) : 766.1 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 689.1 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in the buffers ==
== of different sizes. The larger is the buffer, the more significant ==
== are relative contributions of TLB, L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffer sizes we are expecting to see ==
== page table walk with several requests to SDRAM for almost every ==
== memory access (though 64MiB is not nearly large enough to experience ==
== this effect to its fullest). ==
== ==
== Note 1: All the numbers are representing extra time, which needs to ==
== be added to L1 cache latency. The cycle timings for L1 cache ==
== latency can be usually found in the processor documentation. ==
== Note 2: Dual random read means that we are simultaneously performing ==
== two independent memory accesses at a time. In the case if ==
== the memory subsystem can't handle multiple outstanding ==
== requests, dual random read has the same timings as two ==
== single reads performed one after another. ==
==========================================================================
block size : single random read / dual random read, [MADV_NOHUGEPAGE]
1024 : 0.0 ns / 0.1 ns
2048 : 0.0 ns / 0.1 ns
4096 : 0.0 ns / 0.1 ns
8192 : 0.0 ns / 0.1 ns
16384 : 0.1 ns / 0.1 ns
32768 : 1.7 ns / 2.9 ns
65536 : 6.4 ns / 9.5 ns
131072 : 9.6 ns / 12.3 ns
262144 : 13.7 ns / 17.0 ns
524288 : 15.8 ns / 19.7 ns
1048576 : 17.3 ns / 22.1 ns
2097152 : 42.1 ns / 64.2 ns
4194304 : 98.5 ns / 138.1 ns
8388608 : 143.9 ns / 186.3 ns
16777216 : 167.2 ns / 211.2 ns
33554432 : 180.1 ns / 227.1 ns
67108864 : 200.0 ns / 260.2 ns

block size : single random read / dual random read, [MADV_HUGEPAGE]
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 6.4 ns / 9.4 ns
131072 : 9.5 ns / 12.2 ns
262144 : 11.2 ns / 13.1 ns
524288 : 12.1 ns / 13.5 ns
1048576 : 12.8 ns / 13.6 ns
2097152 : 27.0 ns / 33.0 ns
4194304 : 90.6 ns / 127.8 ns
8388608 : 123.9 ns / 153.8 ns
16777216 : 139.5 ns / 161.2 ns
33554432 : 147.2 ns / 163.6 ns
67108864 : 154.0 ns / 167.6 ns
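The two latency tables in this run differ only in the transparent-huge-page hint applied to the test buffer, which is why the [MADV_HUGEPAGE] numbers drop noticeably at the larger block sizes: larger (typically 2 MiB) pages mean fewer TLB misses and shorter page-table walks. A minimal sketch of how such a buffer can be set up on Linux, assuming a 64 MiB buffer to match the largest block size tested here; the measurement loop itself is omitted, and whether the hint takes effect depends on transparent hugepage support being enabled in the kernel:

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;               /* 64 MiB test buffer */

    /* Anonymous private mapping for the benchmark buffer. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return EXIT_FAILURE;
    }

    /* Ask the kernel to back the region with transparent huge pages
     * (or pass MADV_NOHUGEPAGE for the other table).  This is only a hint. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    /* ... random single/dual pointer-chasing reads over buf go here ... */

    munmap(buf, len);
    return EXIT_SUCCESS;
}
```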
