bpf: Initialize same number of free nodes for each pcpu_freelist

pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
free nodes for some CPUs, then possibly one CPU with fewer nodes, and
leaves the remaining CPUs with 0 nodes. For example, when nr_elems == 256
and num_possible_cpus() == 32, CPUs 0~27 each get 9 free nodes, CPU 28
gets 4 free nodes, and CPUs 29~31 get 0 free nodes, while in fact each
CPU should get 8 nodes equally.
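
For illustration, here is a minimal standalone C sketch of the old
arithmetic (a userspace model, not the kernel code; nr_elems and nr_cpus
are stand-ins for the values above) that reproduces this uneven split:

	#include <stdio.h>

	int main(void)
	{
		unsigned int nr_elems = 256, nr_cpus = 32;
		/* old scheme: nr_elems / nr_cpus + 1 == 9 nodes per CPU */
		unsigned int pcpu_entries = nr_elems / nr_cpus + 1;
		unsigned int left = nr_elems, cpu;

		for (cpu = 0; cpu < nr_cpus; cpu++) {
			unsigned int take = left < pcpu_entries ? left : pcpu_entries;

			/* prints 9 for CPUs 0~27, 4 for CPU 28, 0 for CPUs 29~31 */
			printf("CPU %u gets %u nodes\n", cpu, take);
			left -= take;
		}
		return 0;
	}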

This patch first initializes nr_elems / num_possible_cpus() free nodes
for each CPU, then distributes the remaining nodes one per CPU until
none are left.
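
Under the same assumptions, a userspace sketch of the new distribution
(again only a model of the arithmetic, not the kernel function) shows
each CPU's share becoming n, plus one extra node for the first m CPUs:

	#include <stdio.h>

	int main(void)
	{
		unsigned int nr_elems = 256, nr_cpus = 32;
		unsigned int n = nr_elems / nr_cpus;	/* 8 */
		unsigned int m = nr_elems % nr_cpus;	/* 0 */
		unsigned int cpu_idx;

		for (cpu_idx = 0; cpu_idx < nr_cpus; cpu_idx++) {
			/* first m CPUs get n + 1 nodes, the rest get n */
			unsigned int j = n + (cpu_idx < m ? 1 : 0);

			printf("CPU %u gets %u nodes\n", cpu_idx, j);	/* 8 each */
		}
		return 0;
	}

Since m CPUs get n + 1 nodes and the remaining nr_cpus - m get n, the
per-CPU counts always sum to nr_elems.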

Fixes: e19494edab ("bpf: introduce percpu_freelist")
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20221110122128.105214-1-xukuohai@huawei.com

Xu Kuohai 2022-11-10 07:21:28 -05:00, committed by Andrii Nakryiko
Parent: c20572600e
Commit: 4b45cd81f7
1 changed file with 11 additions and 12 deletions

@@ -100,22 +100,21 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 			    u32 nr_elems)
 {
 	struct pcpu_freelist_head *head;
-	int i, cpu, pcpu_entries;
+	unsigned int cpu, cpu_idx, i, j, n, m;
 
-	pcpu_entries = nr_elems / num_possible_cpus() + 1;
-	i = 0;
+	n = nr_elems / num_possible_cpus();
+	m = nr_elems % num_possible_cpus();
 
+	cpu_idx = 0;
 	for_each_possible_cpu(cpu) {
-again:
 		head = per_cpu_ptr(s->freelist, cpu);
-		/* No locking required as this is not visible yet. */
-		pcpu_freelist_push_node(head, buf);
-		i++;
-		buf += elem_size;
-		if (i == nr_elems)
-			break;
-		if (i % pcpu_entries)
-			goto again;
+		j = n + (cpu_idx < m ? 1 : 0);
+		for (i = 0; i < j; i++) {
+			/* No locking required as this is not visible yet. */
+			pcpu_freelist_push_node(head, buf);
+			buf += elem_size;
+		}
+		cpu_idx++;
 	}
 }
 