WSL2-Linux-Kernel/arch/x86/mm
Latest commit 798dec3304 by Linus Torvalds: x86-64: mm: clarify the 'positive addresses' user address rules
Dave Hansen found the "(long) addr >= 0" code in the x86-64 access_ok
checks somewhat confusing, and suggested using a helper to clarify what
the code is doing.

So this does exactly that: it clarifies what the sign bit check is all
about, by adding a helper macro that spells out what is being tested.
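
For illustration, a minimal sketch of such a helper; the macro name
valid_user_address is an assumption here, not necessarily the
identifier the commit uses:

	/*
	 * Sketch: user pointers are "positive" when cast to a signed
	 * type and kernel pointers are negative, so one signed
	 * comparison against zero is the whole test.
	 */
	#define valid_user_address(x) ((long)(x) >= 0)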

This also adds some explicit comments about how, even with LAM enabled,
any address with the sign bit set will still GP-fault in the
non-canonical region just above the sign bit.
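
For reference, the 4-level paging layout that makes this true (an
illustrative summary, not text from the commit):

	/*
	 * x86-64 virtual address space, 4-level paging:
	 *
	 *   0x0000000000000000 .. 0x00007fffffffffff  user (sign bit clear)
	 *   0x0000800000000000 .. 0xffff7fffffffffff  non-canonical hole,
	 *                                             any access GP-faults
	 *   0xffff800000000000 .. 0xffffffffffffffff  kernel (sign bit set)
	 *
	 * LAM only masks tag bits below bit 63, so bit 63 still divides
	 * user from kernel, and the hole above the user range still
	 * faults on access.
	 */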

This is what allows us to do the user address checks with just the
sign bit, and furthermore to be a bit cavalier about accesses that
might be done with an additional offset even past that point.
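
A hedged sketch of how a range check can then lean on that guarantee;
access_ok_sketch and its PAGE_SIZE fast path are illustrative, not the
exact kernel code:

	#include <stdbool.h>

	#define PAGE_SIZE             4096UL
	#define valid_user_address(x) ((long)(x) >= 0)

	/*
	 * Sketch: a small, bounded access starting at a valid user
	 * address can never reach across the non-canonical hole into
	 * kernel space, so checking the start alone is enough; anything
	 * that strays past the end of the user range GP-faults in
	 * hardware instead.
	 */
	static bool access_ok_sketch(const void *ptr, unsigned long size)
	{
		unsigned long addr = (unsigned long)ptr;

		if (size <= PAGE_SIZE)
			return valid_user_address(addr);

		/* Larger ranges: check the end too, and guard against wraparound. */
		return valid_user_address(addr + size) && addr + size >= addr;
	}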

(And yes, this talks about 'positive' even though zero is also a valid
user address and so technically we should call them 'non-negative'.  But
I don't think using 'non-negative' ends up being more understandable).

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Date: 2023-05-03 10:37:22 -07:00
pat/ - Nick Piggin's "shoot lazy tlbs" series, to improve the performance of ... 2023-04-27 19:42:02 -07:00
Makefile x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
amdtopology.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
cpu_entry_area.c x86/mm: Do not shuffle CPU entry areas without KASLR 2023-03-22 10:42:47 -07:00
debug_pagetables.c x86/mm/dump_pagetables: remove MODULE_LICENSE in non-modules 2023-04-13 13:13:54 -07:00
dump_pagetables.c
extable.c x86-64: mm: clarify the 'positive addresses' user address rules 2023-05-03 10:37:22 -07:00
fault.c x86/mm: try VMA lock-based page fault handling first 2023-04-05 20:03:01 -07:00
highmem_32.c
hugetlbpage.c arch/x86/mm/hugetlbpage.c: pud_huge() returns 0 when using 2-level paging 2022-11-08 15:57:25 -08:00
ident_map.c
init.c Add support for new Linear Address Masking CPU feature. This is similar 2023-04-28 09:43:49 -07:00
init_32.c
init_64.c mm/sparse-vmemmap: generalise vmemmap_populate_hugepages() 2022-12-11 18:12:12 -08:00
iomap_32.c
ioremap.c x86/ioremap: Add hypervisor callback for private MMIO mapping in coco VM 2023-03-26 23:42:40 +02:00
kasan_init_64.c x86/kasan: Populate shadow for shared chunk of the CPU entry area 2022-12-15 10:37:28 -08:00
kaslr.c
kmmio.c x86/mm/kmmio: Remove redundant preempt_disable() 2022-12-12 10:54:48 -05:00
kmsan_shadow.c x86: kmsan: handle CPU entry area 2022-10-03 14:03:26 -07:00
maccess.c
mem_encrypt.c virtio: replace arch_has_restricted_virtio_memory_access() 2022-06-06 08:22:01 +02:00
mem_encrypt_amd.c x86/mm: Handle decryption/re-encryption of bss_decrypted consistently 2023-03-27 09:23:21 +02:00
mem_encrypt_boot.S x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
mem_encrypt_identity.c x86/mm: Fix use of uninitialized buffer in sme_enable() 2023-03-16 12:22:25 +01:00
mm_internal.h
mmap.c
mmio-mod.c x86: Replace cpumask_weight() with cpumask_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa.c x86/numa: Use cpumask_available instead of hardcoded NULL check 2022-08-03 11:44:57 +02:00
numa_32.c
numa_64.c
numa_emulation.c x86/mm: Replace nodes_weight() with nodes_empty() where appropriate 2022-04-10 22:35:38 +02:00
numa_internal.h
pf_in.c
pf_in.h
pgprot.c x86/mm: move protection_map[] inside the platform 2022-07-17 17:14:38 -07:00
pgtable.c mm/pgtable: Fix multiple -Wstringop-overflow warnings 2022-12-01 08:50:38 -08:00
pgtable_32.c
physaddr.c
physaddr.h
pkeys.c x86/pkeys: Clarify PKRU_AD_KEY macro 2022-06-07 16:06:33 -07:00
pti.c x86/mm: Remove P*D_PAGE_MASK and P*D_PAGE_SIZE macros 2022-12-15 10:37:27 -08:00
srat.c
testmmiotrace.c
tlb.c Add support for new Linear Address Masking CPU feature. This is similar 2023-04-28 09:43:49 -07:00