arm64: kaslr: support randomized module area with KASAN_VMALLOC

Now that KASAN_VMALLOC works on arm64, the module region can be
randomized into the vmalloc area.

Test:
	VMALLOC area ffffffc010000000 fffffffdf0000000

	before the patch:
		module_alloc_base/end ffffffc008b80000 ffffffc010000000
	after the patch:
		module_alloc_base/end ffffffdcf4bed000 ffffffc010000000

	And insmod'ing several modules works fine.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Lecopzer Chen 2021-03-24 12:05:21 +08:00 committed by Catalin Marinas
Parent 71b613fc0c
Commit 31d02e7ab0
2 changed files with 19 additions and 15 deletions

--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c

@@ -128,15 +128,17 @@ u64 __init kaslr_early_init(void)
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
 
-	if (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
-	    IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	if (!IS_ENABLED(CONFIG_KASAN_VMALLOC) &&
+	    (IS_ENABLED(CONFIG_KASAN_GENERIC) ||
+	     IS_ENABLED(CONFIG_KASAN_SW_TAGS)))
 		/*
-		 * KASAN does not expect the module region to intersect the
-		 * vmalloc region, since shadow memory is allocated for each
-		 * module at load time, whereas the vmalloc region is shadowed
-		 * by KASAN zero pages. So keep modules out of the vmalloc
-		 * region if KASAN is enabled, and put the kernel well within
-		 * 4 GB of the module region.
+		 * KASAN without KASAN_VMALLOC does not expect the module region
+		 * to intersect the vmalloc region, since shadow memory is
+		 * allocated for each module at load time, whereas the vmalloc
+		 * region is shadowed by KASAN zero pages. So keep modules
+		 * out of the vmalloc region if KASAN is enabled without
+		 * KASAN_VMALLOC, and put the kernel well within 4 GB of the
+		 * module region.
 		 */
 		return offset % SZ_2G;

--- a/arch/arm64/kernel/module.c
+++ b/arch/arm64/kernel/module.c

@@ -40,14 +40,16 @@ void *module_alloc(unsigned long size)
 			NUMA_NO_NODE, __builtin_return_address(0));
 
 	if (!p && IS_ENABLED(CONFIG_ARM64_MODULE_PLTS) &&
-	    !IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    !IS_ENABLED(CONFIG_KASAN_SW_TAGS))
+	    (IS_ENABLED(CONFIG_KASAN_VMALLOC) ||
+	     (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+	      !IS_ENABLED(CONFIG_KASAN_SW_TAGS))))
 		/*
-		 * KASAN can only deal with module allocations being served
-		 * from the reserved module region, since the remainder of
-		 * the vmalloc region is already backed by zero shadow pages,
-		 * and punching holes into it is non-trivial. Since the module
-		 * region is not randomized when KASAN is enabled, it is even
+		 * KASAN without KASAN_VMALLOC can only deal with module
+		 * allocations being served from the reserved module region,
+		 * since the remainder of the vmalloc region is already
+		 * backed by zero shadow pages, and punching holes into it
+		 * is non-trivial. Since the module region is not randomized
+		 * when KASAN is enabled without KASAN_VMALLOC, it is even
 		 * less likely that the module region gets exhausted, so we
 		 * can simply omit this fallback in that case.
 		 */