x86/mm: Undo double _PAGE_PSE clearing

When clearing _PAGE_PRESENT on a huge page, we need to be careful
to also clear _PAGE_PSE, otherwise it might still get confused
for a valid large page table entry.
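
To illustrate, a minimal standalone sketch (not kernel code; the
bit values match the x86 layout, but pmd_present_like() is a
hypothetical stand-in for the kernel's pmd_present(), which also
treats PSE-only entries as present because THP splitting clears
_PAGE_PRESENT temporarily):

	#include <stdbool.h>
	#include <stdint.h>

	#define _PAGE_PRESENT	(1ULL << 0)
	#define _PAGE_PSE	(1ULL << 7)	/* aliases _PAGE_PAT in 4k PTEs */

	/* Rough model of the kernel's pmd_present() check: */
	static bool pmd_present_like(uint64_t pmdval)
	{
		return pmdval & (_PAGE_PRESENT | _PAGE_PSE);
	}

So an entry with _PAGE_PRESENT cleared but _PAGE_PSE left behind,
e.g. pmd_present_like(_PAGE_PSE), still reads as a present large
page table entry.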

We do that near the spot where we *set* _PAGE_PSE.  That's fine,
but it's unnecessary.  pgprot_large_2_4k() already did it.
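
For reference, a simplified standalone model of what
pgprot_large_2_4k() does to the bits (the real helper takes and
returns pgprot_t; pat_large_to_4k() is a made-up name for this
sketch):

	#include <stdint.h>

	#define _PAGE_BIT_PAT		7	/* PAT in a 4k PTE, same bit as PSE */
	#define _PAGE_BIT_PAT_LARGE	12	/* PAT in a 2M/1G entry */
	#define _PAGE_PAT		(1ULL << _PAGE_BIT_PAT)
	#define _PAGE_PAT_LARGE		(1ULL << _PAGE_BIT_PAT_LARGE)

	static uint64_t pat_large_to_4k(uint64_t val)
	{
		uint64_t pat_large = val & _PAGE_PAT_LARGE;

		/* Clears bit 7 (_PAGE_PSE/_PAGE_PAT) unconditionally... */
		val &= ~(_PAGE_PAT | _PAGE_PAT_LARGE);
		/* ...then rewrites it with the relocated PAT bit: */
		return val | (pat_large >> (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
	}

Either way bit 7 gets rewritten, so a stale _PAGE_PSE cannot
survive the conversion.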

BTW, I also noticed that pgprot_large_2_4k() and
pgprot_4k_2_large() are not symmetric.  pgprot_large_2_4k() clears
_PAGE_PSE (because it is aliased to _PAGE_PAT) but
pgprot_4k_2_large() does not put _PAGE_PSE back.  Bummer.
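
In the same simplified model, using the #defines above
(pat_4k_to_large() is again a made-up name), the reverse helper
moves PAT up from bit 7 to bit 12 but never sets bit 7 afterwards,
which is the asymmetry:

	static uint64_t pat_4k_to_large(uint64_t val)
	{
		uint64_t pat_4k = val & _PAGE_PAT;

		val &= ~(_PAGE_PAT | _PAGE_PAT_LARGE);
		/* Bit 7 stays clear; nothing puts _PAGE_PSE back: */
		return val | (pat_4k << (_PAGE_BIT_PAT_LARGE - _PAGE_BIT_PAT));
	}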

Also, add some comments and change "promote" to "move".  "Promote"
seems an odd word to use when we are logically moving a bit to a
lower bit position.  Also add an extra blank line to make it clear
to which line the comment applies.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Kees Cook <keescook@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Nadav Amit <namit@vmware.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/20180406205504.9B0F44A9@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
1 file changed, 6 insertions(+), 3 deletions(-)

@@ -583,6 +583,7 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	 * up accordingly.
 	 */
 	old_pte = *kpte;
+	/* Clear PSE (aka _PAGE_PAT) and move PAT bit to correct position */
 	req_prot = pgprot_large_2_4k(old_prot);
 
 	pgprot_val(req_prot) &= ~pgprot_val(cpa->mask_clr);
@@ -597,8 +598,6 @@ try_preserve_large_page(pte_t *kpte, unsigned long address,
 	req_prot = pgprot_clear_protnone_bits(req_prot);
 	if (pgprot_val(req_prot) & _PAGE_PRESENT)
 		pgprot_val(req_prot) |= _PAGE_PSE;
-	else
-		pgprot_val(req_prot) &= ~_PAGE_PSE;
 	req_prot = canon_pgprot(req_prot);
 
 	/*
@@ -684,8 +683,12 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 	switch (level) {
 	case PG_LEVEL_2M:
 		ref_prot = pmd_pgprot(*(pmd_t *)kpte);
-		/* clear PSE and promote PAT bit to correct position */
+
+		/*
+		 * Clear PSE (aka _PAGE_PAT) and move
+		 * PAT bit to correct position.
+		 */
 		ref_prot = pgprot_large_2_4k(ref_prot);
 		ref_pfn = pmd_pfn(*(pmd_t *)kpte);
 		break;