powerpc/mm/hash: Don't memset pgd table if not needed
We need to zero out the pgd table only if we share the slab cache with the pud/pmd level caches. With the addition of 4PB support, we no longer share the slab cache. Instead of removing the code completely, hide it within an #ifdef. We don't need to do this for any other page table level, because they all allocate a table of double the size and we take care of initializing the first half correctly during page table zap.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Consolidate multiple #if / #ifdef into one]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This commit is contained in:
Parent: c2b4d8b741
Commit: 872a100a49
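Why the guard matters comes down to how a shared slab cache recycles objects. Below is a minimal user-space toy model (not the kernel allocator, and not part of this patch): a single shared free list stands in for a PGT_CACHE() shared between two page-table levels, and TABLE_SIZE is a hypothetical stand-in for PGD_TABLE_SIZE. It shows that a table recycled from a pool shared with another level can come back with stale contents and therefore needs the memset, whereas a fresh allocation from a clean pool does not.

/*
 * Toy user-space model, not kernel code: one shared free list stands in
 * for a slab cache shared between two page-table levels.  An object
 * recycled from the shared pool keeps whatever the previous owner wrote,
 * so the new owner has to clear it; a fresh allocation from a clean pool
 * does not need the extra memset.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 64			/* stand-in for PGD_TABLE_SIZE */

static void *shared_pool;		/* single recycled object */

static void *cache_alloc(void)
{
	if (shared_pool) {		/* recycled: contents are stale */
		void *p = shared_pool;
		shared_pool = NULL;
		return p;
	}
	return calloc(1, TABLE_SIZE);	/* fresh: handed out zeroed */
}

static void cache_free(void *p)
{
	shared_pool = p;		/* returned to the pool untouched */
}

int main(void)
{
	/* another level ("pud") uses and dirties a table, then frees it */
	unsigned char *pud = cache_alloc();
	memset(pud, 0xaa, TABLE_SIZE);
	cache_free(pud);

	/* the "pgd" level now gets the same object back from the pool */
	unsigned char *pgd = cache_alloc();
	printf("before clearing: 0x%02x\n", pgd[0]);	/* prints 0xaa */

	/* the step the patch makes conditional on actually sharing a cache */
	memset(pgd, 0, TABLE_SIZE);
	printf("after clearing:  0x%02x\n", pgd[0]);	/* prints 0x00 */

	free(pgd);
	return 0;
}

When each level has its own cache, as after the 4PB change, the recycled-object case goes away for the pgd and the memset can be compiled out, which is what the #if guard in the hunk below does.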
@@ -80,8 +80,18 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
			       pgtable_gfp_flags(mm, GFP_KERNEL));
	/*
	 * With hugetlb, we don't clear the second half of the page table.
	 * If we share the same slab cache with the pmd or pud level table,
	 * we need to make sure we zero out the full table on alloc.
	 * With 4K we don't store slot in the second half. Hence we don't
	 * need to do this for 4k.
	 */
#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_PPC_64K_PAGES) && \
	((H_PGD_INDEX_SIZE == H_PUD_CACHE_INDEX) || \
	 (H_PGD_INDEX_SIZE == H_PMD_CACHE_INDEX))
	memset(pgd, 0, PGD_TABLE_SIZE);
#endif
	return pgd;
}