zsmalloc: add Kconfig for enabling page table method

Zsmalloc has two methods for accessing an object that spans two
pages: 1) copy-based and 2) PTE-based. The history of why both
approaches are supported can be found in [1].

However, hard-coding which architectures use the PTE-based method
was a bad choice: a single architecture covers many different SoCs,
which can differ in cache size, CPU speed and so on. It is better
to expose the choice to the user as a selectable Kconfig option,
as Andrew Morton suggested.

[1] https://lkml.org/lkml/2012/7/11/58
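
To make the distinction concrete, the copy-based method conceptually
works like the sketch below. This is a simplified illustration, not
the actual zsmalloc code: copy_map_object() is an invented name for
what zsmalloc's copy-based __zs_map_object() does.

/*
 * Copy-based method (sketch): an object that straddles a page
 * boundary is stitched together into a per-cpu buffer with two
 * memcpy()s; the buffer is copied back to the pages on unmap.
 */
#include <linux/highmem.h>
#include <linux/string.h>

static void *copy_map_object(char *vm_buf, struct page *pages[2],
			     int off, int size)
{
	int first = PAGE_SIZE - off;	/* bytes located in the first page */
	void *addr;

	addr = kmap_atomic(pages[0]);
	memcpy(vm_buf, addr + off, first);
	kunmap_atomic(addr);

	addr = kmap_atomic(pages[1]);
	memcpy(vm_buf + first, addr, size - first);
	kunmap_atomic(addr);

	return vm_buf;	/* caller writes changes back on unmap */
}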

Acked-by: Nitin Gupta <ngupta@vflare.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Minchan Kim, 2013-12-11 11:04:36 +09:00; committed by Greg Kroah-Hartman
Parent: 99ec297ad6
Commit: 1b945aeef0
2 changed files: 17 additions and 15 deletions

@@ -9,3 +9,16 @@ config ZSMALLOC
 	  non-standard allocator interface where a handle, not a pointer, is
 	  returned by an alloc(). This handle must be mapped in order to
 	  access the allocated space.
+
+config PGTABLE_MAPPING
+	bool "Use page table mapping to access object in zsmalloc"
+	depends on ZSMALLOC
+	help
+	  By default, zsmalloc uses a copy-based object mapping method to
+	  access allocations that span two pages. However, if a particular
+	  architecture (ex, ARM) performs VM mapping faster than copying,
+	  then you should select this. This causes zsmalloc to use page table
+	  mapping rather than copying for object mapping.
+
+	  You can check speed with zsmalloc benchmark[1].
+	  [1] https://github.com/spartacus06/zsmalloc
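
For illustration, enabling the new option would amount to a .config
fragment like the one below. This is a sketch: it assumes ZSMALLOC
itself is enabled, since PGTABLE_MAPPING depends on it.

# illustrative .config fragment, using the symbols defined above
CONFIG_ZSMALLOC=y
CONFIG_PGTABLE_MAPPING=y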

@@ -218,19 +218,8 @@ struct zs_pool {
 #define CLASS_IDX_MASK	((1 << CLASS_IDX_BITS) - 1)
 #define FULLNESS_MASK	((1 << FULLNESS_BITS) - 1)
 
-/*
- * By default, zsmalloc uses a copy-based object mapping method to access
- * allocations that span two pages. However, if a particular architecture
- * performs VM mapping faster than copying, then it should be added here
- * so that USE_PGTABLE_MAPPING is defined. This causes zsmalloc to use
- * page table mapping rather than copying for object mapping.
- */
-#if defined(CONFIG_ARM) && !defined(MODULE)
-#define USE_PGTABLE_MAPPING
-#endif
-
 struct mapping_area {
-#ifdef USE_PGTABLE_MAPPING
+#ifdef CONFIG_PGTABLE_MAPPING
 	struct vm_struct *vm; /* vm area for mapping object that span pages */
 #else
 	char *vm_buf; /* copy buffer for objects that span pages */
@@ -631,7 +620,7 @@ static struct page *find_get_zspage(struct size_class *class)
 	return page;
 }
 
-#ifdef USE_PGTABLE_MAPPING
+#ifdef CONFIG_PGTABLE_MAPPING
 static inline int __zs_cpu_up(struct mapping_area *area)
 {
 	/*
@@ -669,7 +658,7 @@ static inline void __zs_unmap_object(struct mapping_area *area,
 	unmap_kernel_range(addr, PAGE_SIZE * 2);
 }
 
-#else /* USE_PGTABLE_MAPPING */
+#else /* CONFIG_PGTABLE_MAPPING */
 
 static inline int __zs_cpu_up(struct mapping_area *area)
 {
@@ -747,7 +736,7 @@ out:
 	pagefault_enable();
 }
 
-#endif /* USE_PGTABLE_MAPPING */
+#endif /* CONFIG_PGTABLE_MAPPING */
 
 static int zs_cpu_notifier(struct notifier_block *nb, unsigned long action,
 				void *pcpu)
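
For comparison with the copy-based sketch above, the PTE-based pair
guarded by CONFIG_PGTABLE_MAPPING works roughly as follows. This is a
simplified illustration with invented names, assuming the 3.13-era
map_vm_area() signature (which takes a struct page ***), not the
exact upstream code.

/*
 * PTE-based method (sketch): wire both physical pages into a
 * pre-reserved, virtually contiguous 2-page area instead of
 * copying the halves into a buffer.
 */
#include <linux/mm.h>
#include <linux/vmalloc.h>

static void *pte_map_object(struct vm_struct *vm, struct page *pages[2],
			    int off)
{
	/* Install PTEs so the object appears virtually contiguous. */
	BUG_ON(map_vm_area(vm, PAGE_KERNEL, &pages));
	return vm->addr + off;
}

static void pte_unmap_object(struct vm_struct *vm)
{
	/* Clear the PTEs and flush the TLB for the 2-page range. */
	unmap_kernel_range((unsigned long)vm->addr, PAGE_SIZE * 2);
}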