WSL2-Linux-Kernel/arch/parisc
John David Anglin 01ab605704 parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results
The increased use of pdtlb/pitlb instructions seemed to increase the
frequency of random segmentation faults while building packages. Further, we
had a number of cases where TLB inserts would repeatedly fail and all
forward progress would stop. The Haskell ghc package caused a lot of
trouble in this area. The final indication of a race in pte handling was
this syslog entry on sibaris (C8000):

 swap_free: Unused swap offset entry 00000004
 BUG: Bad page map in process mysqld  pte:00000100 pmd:019bbec5
 addr:00000000ec464000 vm_flags:00100073 anon_vma:0000000221023828 mapping: (null) index:ec464
 CPU: 1 PID: 9176 Comm: mysqld Not tainted 4.0.0-2-parisc64-smp #1 Debian 4.0.5-1
 Backtrace:
  [<0000000040173eb0>] show_stack+0x20/0x38
  [<0000000040444424>] dump_stack+0x9c/0x110
  [<00000000402a0d38>] print_bad_pte+0x1a8/0x278
  [<00000000402a28b8>] unmap_single_vma+0x3d8/0x770
  [<00000000402a4090>] zap_page_range+0xf0/0x198
  [<00000000402ba2a4>] SyS_madvise+0x404/0x8c0

Note that the pte value is 0 except for the accessed bit 0x100. This bit
shouldn't be set without the present bit.
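For reference, the corruption above violates the invariant sketched below.
This is an illustrative check only, not part of the patch: pte_looks_corrupt()
is a made-up name, and the snippet assumes the usual kernel headers, with
_PAGE_ACCESSED (0x100) and _PAGE_PRESENT (0x200) being the parisc software
bits in the page table entry.

  /* Illustration only, not kernel code: a PTE with the accessed bit set
   * but the present bit clear, as in the log above, points to a lost or
   * racing page table update. */
  static inline int pte_looks_corrupt(pte_t pte)
  {
          return (pte_val(pte) & _PAGE_ACCESSED) &&
                 !(pte_val(pte) & _PAGE_PRESENT);
  }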

The madvise system call is probably a trigger for many of the random
segmentation faults.

In looking at the kernel code, I found the following problems:

1) The pte_clear define didn't take the TLB lock when clearing a pte.
2) We didn't test the pte present bit inside the lock in the exception
support code.
3) The pte and TLB locks needed to be merged in order to ensure consistency
between the page table and the TLB. This also has the effect of serializing
TLB broadcasts on SMP systems (a rough sketch follows this list).
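The combined locking in item 3 can be sketched roughly as follows. This is a
simplified stand-in, not the actual patch: the lock and helper names
(pa_tlb_lock, purge_tlb_entries, set_pte) mirror the parisc code, but the
function itself is only an illustration.

  /* Sketch only: clear the page table entry and purge the matching TLB
   * entry under a single shared spinlock, so the page table and the TLB
   * can never be seen out of sync and TLB broadcasts are serialized on
   * SMP.  Not the real pte_clear implementation. */
  static DEFINE_SPINLOCK(pa_tlb_lock);

  static inline void pte_clear_locked(struct mm_struct *mm,
                                      unsigned long addr, pte_t *ptep)
  {
          unsigned long flags;

          spin_lock_irqsave(&pa_tlb_lock, flags);
          set_pte(ptep, __pte(0));        /* clear the entry...          */
          purge_tlb_entries(mm, addr);    /* ...then purge the stale TLB */
          spin_unlock_irqrestore(&pa_tlb_lock, flags);
  }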

The attached change implements the above and a few other tweaks to try
to improve performance. Based on the timing code, TLB purges are very
slow (e.g., ~209 cycles per page on rp3440). Thus, I think it is
beneficial to test the split_tlb variable to avoid duplicate purges.
All PA 2.0 machines probably have combined TLBs.
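The split_tlb test amounts to something like the sketch below (locking
omitted; mtsp, pdtlb and pitlb are the existing parisc helpers, split_tlb is
the boot-time flag saying whether the machine has separate I and D TLBs, and
purge_one_page is an illustrative name, not a function in the patch):

  /* Sketch only: purge the data TLB entry, and issue the separate
   * instruction-TLB purge only on machines with split I/D TLBs.  On a
   * combined TLB the pdtlb already removed the translation, so the
   * extra pitlb would just add another slow purge. */
  static inline void purge_one_page(struct mm_struct *mm, unsigned long addr)
  {
          mtsp(mm->context, 1);           /* put the space id in %sr1  */
          pdtlb(addr);                    /* purge the data TLB entry  */
          if (unlikely(split_tlb))
                  pitlb(addr);            /* purge the insn TLB entry  */
  }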

I dropped the use of __flush_tlb_range in flush_tlb_mm once I realized
that all applications and most threads have a stack size too large for
the range flush to be useful. I added some comments to this effect.
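In other words, a whole-address-space flush is better served by dropping the
context entirely than by purging page by page. One illustrative shape for
that, assumed rather than taken from the patch, uses the parisc space-id
helpers (free_sid, alloc_sid, load_context):

  /* Sketch of the idea only: instead of purging a huge range one page
   * at a time at hundreds of cycles each, hand the mm a fresh space id
   * so every old translation becomes unreachable.  free_sid, alloc_sid
   * and load_context are assumed to behave as in the parisc
   * mmu_context code. */
  static inline void flush_tlb_mm_sketch(struct mm_struct *mm)
  {
          if (mm->context != 0)
                  free_sid(mm->context);
          mm->context = alloc_sid();
          if (mm == current->active_mm)
                  load_context(mm->context);
  }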

Since implementing 1 through 3, I haven't had any random segmentation
faults on mx3210 (rp3440) in about one week of building code and running
as a Debian buildd.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Helge Deller <deller@gmx.de>
2015-07-10 21:47:47 +02:00
configs         parisc: Update defconfigs which were missing CONFIG_NET.  2014-09-24 14:34:29 -04:00
include         parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results  2015-07-10 21:47:47 +02:00
kernel          parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results  2015-07-10 21:47:47 +02:00
lib             parisc: percpu: update comments referring to __get_cpu_var  2014-12-13 12:42:53 -08:00
math-emu        parisc: remove duplicate define  2013-11-07 22:28:15 +01:00
mm              mm/fault, arch: Use pagefault_disable() to check for disabled pagefaults in the handler  2015-05-19 08:39:15 +02:00
oprofile
Kconfig         parisc: expose number of page table levels on Kconfig level  2015-04-14 16:49:02 -07:00
Kconfig.debug   consolidate per-arch stack overflow debugging options  2013-07-04 11:25:39 -07:00
Makefile        Merge branch 'kbuild' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild  2015-02-19 10:07:08 -08:00
defpalo.conf    parisc: switch to gzip-compressed vmlinuz kernel  2013-07-09 22:09:20 +02:00
install.sh      parisc: make "make install" not depend on vmlinux  2013-11-07 22:28:06 +01:00
nm