
Conversation

@kernel-patches-bot

Pull request for series with
subject: net: Initialize return value in gro_cells_receive
version: 1
url: https://patchwork.kernel.org/project/bpf/list/?series=360465
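The subject line names a classic bug class: a return value assigned only on some code paths. The following is a hedged sketch of that class, loosely modeled on gro_cells_receive() in net/core/gro_cells.c; since the patch itself failed to apply below, this is illustrative, not the actual hunk:

```c
/* Sketch of the bug class named in the subject line (simplified: the
 * GRO queueing path is elided); not the patch's actual hunk. Before
 * the fix, `res` was only assigned on some paths, so an early exit
 * could return an indeterminate value to the caller.
 */
int gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
{
	struct net_device *dev = skb->dev;
	int res = NET_RX_DROP;	/* the fix: initialize the return value */

	rcu_read_lock();
	if (unlikely(!(dev->flags & IFF_UP)))
		goto drop;	/* early exits now return a defined value */

	res = netif_rx(skb);	/* the normal path assigns res explicitly */
	goto unlock;
drop:
	kfree_skb(skb);
unlock:
	rcu_read_unlock();
	return res;
}
```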

@kernel-patches-bot

Master branch: 67ed375
series: https://patchwork.kernel.org/project/bpf/list/?series=360465
version: 1

Pull request is NOT updated. Failed to apply https://patchwork.kernel.org/project/bpf/list/?series=360465
error message:

Cmd('git') failed due to: exit code(128)
cmdline: git am -3

stdout:
  Applying: net: Initialize return value in gro_cells_receive
  Patch failed at 0001 net: Initialize return value in gro_cells_receive
  When you have resolved this problem, run "git am --continue".
  If you prefer to skip this patch, run "git am --skip" instead.
  To restore the original branch and stop patching, run "git am --abort".

stderr:
  error: corrupt patch at line 6
  error: could not build fake ancestor
  hint: Use 'git am --show-current-patch' to see the failed patch

conflict:

@kernel-patches-bot kernel-patches-bot deleted the series/360465=>bpf-next branch October 7, 2020 01:45
kernel-patches-bot pushed a commit that referenced this pull request Nov 18, 2021
In this patch:
1) Add a new prog "for_each_helper" which tests the basic functionality of the bpf_for_each helper.
2) Add pyperf600_foreach and strobemeta_foreach to test the performance of using bpf_for_each instead of a for loop.

The results of pyperf600 and strobemeta are as follows:

~strobemeta~
Baseline
    verification time 6808200 usec
    stack depth 496
    processed 592132 insns (limit 1000000) max_states_per_insn 14
    total_states 16018 peak_states 13684 mark_read 3132
    #188 verif_scale_strobemeta:OK (unrolled loop)

Using bpf_for_each
    verification time 31589 usec
    stack depth 96+408
    processed 1630 insns (limit 1000000) max_states_per_insn 4
    total_states 107 peak_states 107 mark_read 60
    #189 verif_scale_strobemeta_foreach:OK

~pyperf600~
Baseline
    verification time 29702486 usec
    stack depth 368
    processed 626838 insns (limit 1000000) max_states_per_insn 7
    total_states 30368 peak_states 30279 mark_read 748
    #182 verif_scale_pyperf600:OK (unrolled loop)

Using bpf_for_each
    verification time 148488 usec
    stack depth 320+40
    processed 10518 insns (limit 1000000) max_states_per_insn 10
    total_states 705 peak_states 517 mark_read 38
    #183 verif_scale_pyperf600_foreach:OK

Using the bpf_for_each helper cut verification time and the number of processed instructions by roughly 99% (e.g. strobemeta: 6808200 usec down to 31589 usec) compared with the unrolled loops.

Signed-off-by: Joanne Koong <joannekoong@fb.com>
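The numbers above follow from how the helper works: the loop body moves into a callback that the verifier checks once, instead of verifying every unrolled iteration. Below is a minimal sketch of that pattern; it assumes the helper's final upstream form, bpf_loop() (this series' bpf_for_each was renamed before landing), and is illustrative rather than the series' actual test program.

```c
// SPDX-License-Identifier: GPL-2.0
/* Illustrative sketch of the callback-style loop pattern benchmarked
 * above, assuming the helper's upstream signature:
 *   long bpf_loop(u32 nr_loops, void *callback_fn, void *callback_ctx,
 *                 u64 flags);
 * Not the series' actual for_each_helper test program.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct loop_ctx {
	__u64 sum;
};

/* The verifier checks this body once, no matter how many iterations
 * run at load time, which is why verification time and processed
 * instruction counts drop so sharply vs. an unrolled loop. */
static long accumulate(__u32 index, void *data)
{
	struct loop_ctx *lctx = data;

	lctx->sum += index;
	return 0;	/* 0 = continue, 1 = break out of the loop */
}

SEC("tracepoint/syscalls/sys_enter_getpid")
int loop_demo(void *ctx)
{
	struct loop_ctx lctx = { .sum = 0 };

	/* 600 iterations cost one verification pass over accumulate(),
	 * not 600 inlined copies of the loop body. */
	bpf_loop(600, accumulate, &lctx, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```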