With modern NICs, it is not unusual to have ~256,000 active DMA
mappings, and a hash size of 1024 buckets is too small.

Forcing a full cache line per bucket does not seem useful, especially now
that we have contention on free_entries_lock for allocation and freeing
of entries.  Better to use the space to fit more buckets.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
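
For scale, here is a standalone arithmetic sketch of what the change buys; the 256,000-mapping figure comes from the message above, while the 64-byte padded bucket and 24-byte unpadded bucket (list_head plus spinlock_t on x86-64) are assumptions, not measured values:

#include <stdio.h>

int main(void)
{
	/* 256,000 active mappings comes from the commit message; bucket
	 * sizes are assumptions: 64 bytes when padded to a cache line,
	 * 24 bytes for a bare list_head + spinlock_t on x86-64.
	 */
	const unsigned long entries = 256000;
	const unsigned long old_buckets = 1024, new_buckets = 16384;
	const unsigned long padded_bucket = 64, packed_bucket = 24;

	printf("avg chain length: %lu -> %lu entries per bucket\n",
	       entries / old_buckets, entries / new_buckets);
	printf("table footprint:  %lu KiB -> %lu KiB\n",
	       old_buckets * padded_bucket / 1024,
	       new_buckets * packed_bucket / 1024);
	return 0;
}

Under these assumptions the table grows from 64 KiB to 384 KiB, but the average chain walked under each bucket lock drops from about 250 entries to about 16.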
Eric Dumazet 2019-10-30 11:48:44 -07:00, committed by Christoph Hellwig
Parent: d3694f3073
Commit: 5e76f56457
1 changed file with 2 additions and 2 deletions


@@ -27,7 +27,7 @@
 #include <asm/sections.h>
-#define HASH_SIZE 1024ULL
+#define HASH_SIZE 16384ULL
 #define HASH_FN_SHIFT 13
 #define HASH_FN_MASK (HASH_SIZE - 1)
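
The bucket index comes from shifting and masking the DMA address with the macros above; the following userspace model of that computation (the kernel's hash_fn() takes a struct dma_debug_entry, so the signature here is simplified) shows how the wider mask lets four more address bits pick the bucket:

#include <stdint.h>
#include <stdio.h>

#define HASH_FN_SHIFT 13

/* Bucket selection is a shift followed by a mask; widening the table
 * from 1024 to 16384 buckets means four more address bits take part.
 */
static unsigned int bucket_index(uint64_t dev_addr, uint64_t hash_size)
{
	return (dev_addr >> HASH_FN_SHIFT) & (hash_size - 1);
}

int main(void)
{
	uint64_t addr = 0xfee00000ULL;	/* arbitrary example DMA address */

	printf("addr %#llx -> bucket %u of 1024, bucket %u of 16384\n",
	       (unsigned long long)addr,
	       bucket_index(addr, 1024), bucket_index(addr, 16384));
	return 0;
}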
@@ -87,7 +87,7 @@ typedef bool (*match_fn)(struct dma_debug_entry *, struct dma_debug_entry *);
 struct hash_bucket {
 	struct list_head list;
 	spinlock_t lock;
-} ____cacheline_aligned_in_smp;
+};
 
 /* Hash list to save the allocated dma addresses */
 static struct hash_bucket dma_entry_hash[HASH_SIZE];
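
For reference, a userspace model of the per-bucket locking pattern around dma_entry_hash; pthread mutexes and a bare pointer stand in for the kernel's spinlock_t and list_head, and the function bodies are simplified, not the actual kernel/dma/debug.c code. The per-bucket lock is held only for the short chain operation, which is part of why trading the cache-line padding for a denser table is considered acceptable:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define HASH_SIZE     16384ULL
#define HASH_FN_SHIFT 13
#define HASH_FN_MASK  (HASH_SIZE - 1)

/* Unpadded bucket: a chain head plus a lock, with no cache-line
 * alignment, mirroring the struct change above.
 */
struct hash_bucket {
	void *head;
	pthread_mutex_t lock;
};

static struct hash_bucket dma_entry_hash[HASH_SIZE];

/* Index, lock, operate, unlock: the per-bucket lock is only held for
 * the short chain operation itself.
 */
static struct hash_bucket *get_hash_bucket(uint64_t dev_addr)
{
	struct hash_bucket *bucket =
		&dma_entry_hash[(dev_addr >> HASH_FN_SHIFT) & HASH_FN_MASK];

	pthread_mutex_lock(&bucket->lock);
	return bucket;
}

static void put_hash_bucket(struct hash_bucket *bucket)
{
	pthread_mutex_unlock(&bucket->lock);
}

int main(void)
{
	unsigned long long i;

	for (i = 0; i < HASH_SIZE; i++)
		pthread_mutex_init(&dma_entry_hash[i].lock, NULL);

	struct hash_bucket *bucket = get_hash_bucket(0xfee00000ULL);
	/* ... insert into or walk bucket->head here ... */
	put_hash_bucket(bucket);

	printf("locked and released bucket %llu\n",
	       (0xfee00000ULL >> HASH_FN_SHIFT) & HASH_FN_MASK);
	return 0;
}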