Commit Graph

47 Commits

Author SHA1 Message Date
Helge Deller 24d0492b7d parisc: Fix TLB related boot crash on SMP machines
At bootup we run measurements to calculate the best threshold for when we
should be using full TLB flushes instead of just flushing a specific number of
TLB entries.  This performance test is run over the kernel text segment.

But running this TLB performance test on the kernel text segment turned out to
crash some SMP machines when the kernel text pages were mapped as huge pages.

To avoid those crashes this patch simply skips this test on SMP machines
and calculates an optimal threshold based on the maximum number of available
TLB entries and the number of online CPUs.
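
A minimal sketch of that fallback calculation, with its assumptions spelled out in the comments (tlb_entries() is an invented helper, and the exact formula is not taken from the patch):

/* Hedged illustration only -- not the actual patch.  Scale an estimate
 * of per-CPU TLB capacity by the number of online CPUs instead of
 * timing flushes over the (possibly huge-page mapped) kernel text. */
static void __init set_tlb_threshold_sketch(void)
{
        unsigned long threshold;

        /* tlb_entries() is an assumed helper returning usable TLB slots */
        threshold = tlb_entries() * PAGE_SIZE * num_online_cpus();
        parisc_tlb_flush_threshold = PAGE_ALIGN(threshold);

        printk(KERN_INFO "Setting TLB flush threshold to %lu kB\n",
               parisc_tlb_flush_threshold >> 10);
}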

On the technical side, this is what seems to happen:
The TLB measurement code uses flush_tlb_kernel_range() to flush specific TLB
entries with a page size of 4k (pdtlb 0(sr1,addr)). On UP systems this purge
instruction seems to work without problems even if the pages were mapped as
huge pages.  But on SMP systems the TLB purge instruction is broadcast to
other CPUs. Those CPUs then crash the machine because the page size is not as
expected.  C8000 machines with PA8800/PA8900 CPUs were not affected by this
problem, because the required cache coherency prohibits the use of huge pages
altogether.  Sadly I didn't find any documentation about this behaviour, so this
finding is purely based on testing with physical SMP machines (A500-44 and
J5000, both 2-way boxes).

Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Helge Deller <deller@gmx.de>
2016-12-08 21:27:18 +01:00
John David Anglin 741dc7bf1c parisc: Fix races in parisc_setup_cache_timing()
Helge reported to me the following startup crash:

[    0.000000] Linux version 4.8.0-1-parisc64-smp (debian-kernel@lists.debian.org) (gcc version 5.4.1 20161019 (GCC) ) #1 SMP Debian 4.8.7-1 (2016-11-13)
[    0.000000] The 64-bit Kernel has started...
[    0.000000] Kernel default page size is 4 KB. Huge pages enabled with 1 MB physical and 2 MB virtual size.
[    0.000000] Determining PDC firmware type: System Map.
[    0.000000] model 9000/785/J5000
[    0.000000] Total Memory: 2048 MB
[    0.000000] Memory: 2018528K/2097152K available (9272K kernel code, 3053K rwdata, 1319K rodata, 1024K init, 840K bss, 78624K reserved, 0K cma-reserved)
[    0.000000] virtual kernel memory layout:
[    0.000000]     vmalloc : 0x0000000000008000 - 0x000000003f000000   (1007 MB)
[    0.000000]     memory  : 0x0000000040000000 - 0x00000000c0000000   (2048 MB)
[    0.000000]       .init : 0x0000000040100000 - 0x0000000040200000   (1024 kB)
[    0.000000]       .data : 0x0000000040b0e000 - 0x0000000040f533e0   (4372 kB)
[    0.000000]       .text : 0x0000000040200000 - 0x0000000040b0e000   (9272 kB)
[    0.768910] Brought up 1 CPUs
[    0.992465] NET: Registered protocol family 16
[    2.429981] Releasing cpu 1 now, hpa=fffffffffffa2000
[    2.635751] CPU(s): 2 out of 2 PA8500 (PCX-W) at 440.000000 MHz online
[    2.726692] Setting cache flush threshold to 1024 kB
[    2.729932] Not-handled unaligned insn 0x43ffff80
[    2.798114] Setting TLB flush threshold to 140 kB
[    2.928039] Unaligned handler failed, ret = -1
[    3.000419]       _______________________________
[    3.000419]      < Your System ate a SPARC! Gah! >
[    3.000419]       -------------------------------
[    3.000419]              \   ^__^
[    3.000419]                  (__)\       )\/\
[    3.000419]                   U  ||----w |
[    3.000419]                      ||     ||
[    9.340055] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-1-parisc64-smp #1 Debian 4.8.7-1
[    9.448082] task: 00000000bfd48060 task.stack: 00000000bfd50000
[    9.528040]
[   10.760029] IASQ: 0000000000000000 0000000000000000 IAOQ: 000000004025d154 000000004025d158
[   10.868052]  IIR: 43ffff80    ISR: 0000000000340000  IOR: 000001ff54150960
[   10.960029]  CPU:        1   CR30: 00000000bfd50000 CR31: 0000000011111111
[   11.052057]  ORIG_R28: 000000004021e3b4
[   11.100045]  IAOQ[0]: irq_exit+0x94/0x120
[   11.152062]  IAOQ[1]: irq_exit+0x98/0x120
[   11.208031]  RP(r2): irq_exit+0xb8/0x120
[   11.256074] Backtrace:
[   11.288067]  [<00000000402cd944>] cpu_startup_entry+0x1e4/0x598
[   11.368058]  [<0000000040109528>] smp_callin+0x2c0/0x2f0
[   11.436308]  [<00000000402b53fc>] update_curr+0x18c/0x2d0
[   11.508055]  [<00000000402b73b8>] dequeue_entity+0x2c0/0x1030
[   11.584040]  [<00000000402b3cc0>] set_next_entity+0x80/0xd30
[   11.660069]  [<00000000402c1594>] pick_next_task_fair+0x614/0x720
[   11.740085]  [<000000004020dd34>] __schedule+0x394/0xa60
[   11.808054]  [<000000004020e488>] schedule+0x88/0x118
[   11.876039]  [<0000000040283d3c>] rescuer_thread+0x4d4/0x5b0
[   11.948090]  [<000000004028fc4c>] kthread+0x1ec/0x248
[   12.016053]  [<0000000040205020>] end_fault_vector+0x20/0xc0
[   12.092239]  [<00000000402050c0>] _switch_to_ret+0x0/0xf40
[   12.164044]
[   12.184036] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.8.0-1-parisc64-smp #1 Debian 4.8.7-1
[   12.244040] Backtrace:
[   12.244040]  [<000000004021c480>] show_stack+0x68/0x80
[   12.244040]  [<00000000406f332c>] dump_stack+0xec/0x168
[   12.244040]  [<000000004021c74c>] die_if_kernel+0x25c/0x430
[   12.244040]  [<000000004022d320>] handle_unaligned+0xb48/0xb50
[   12.244040]
[   12.632066] ---[ end trace 9ca05a7215c7bbb2 ]---
[   12.692036] Kernel panic - not syncing: Attempted to kill the idle task!

We have the insn 0x43ffff80 in IIR but from IAOQ we should have:
   4025d150:   0f f3 20 df     ldd,s r19(r31),r31
   4025d154:   0f 9f 00 9c     ldw r31(ret0),ret0
   4025d158:   bf 80 20 58     cmpb,*<> r0,ret0,4025d18c <irq_exit+0xcc>

Cpu0 has just completed running parisc_setup_cache_timing:

[    2.429981] Releasing cpu 1 now, hpa=fffffffffffa2000
[    2.635751] CPU(s): 2 out of 2 PA8500 (PCX-W) at 440.000000 MHz online
[    2.726692] Setting cache flush threshold to 1024 kB
[    2.729932] Not-handled unaligned insn 0x43ffff80
[    2.798114] Setting TLB flush threshold to 140 kB
[    2.928039] Unaligned handler failed, ret = -1

From the backtrace, cpu1 is in smp_callin:

void __init smp_callin(void)
{
       int slave_id = cpu_now_booting;

       smp_cpu_init(slave_id);
       preempt_disable();

       flush_cache_all_local(); /* start with known state */
       flush_tlb_all_local(NULL);

       local_irq_enable();  /* Interrupts have been off until now */

       cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);

So, it has just flushed its caches and the TLB. It would seem either the
flushes in parisc_setup_cache_timing or smp_callin have corrupted kernel
memory.

The attached patch reworks parisc_setup_cache_timing to remove the races
in setting the cache and TLB flush thresholds. It also corrects the
number of bytes flushed in the TLB calculation.

The patch flushes the cache and TLB on cpu0 before starting the
secondary processors so that they are started from a known state.

Tested with a few reboots on c8000.

Signed-off-by: John David Anglin  <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org> # v3.18+
Signed-off-by: Helge Deller <deller@gmx.de>
2016-11-25 12:31:57 +01:00
Al Viro 3baf32898e parisc: use %pD
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2016-08-07 23:38:49 -04:00
Kirill A. Shutemov 09cbfeaf1a mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros
The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
ago with the promise that one day it would be possible to implement the page
cache with bigger chunks than PAGE_SIZE.

This promise never materialized, and it is unlikely it ever will.

We have many places where PAGE_CACHE_SIZE is assumed to be equal to
PAGE_SIZE.  And it's a constant source of confusion whether the
PAGE_CACHE_* or PAGE_* constant should be used in a particular case,
especially on the border between fs and mm.

Globally switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
breakage to be doable.

Let's stop pretending that pages in page cache are special.  They are
not.

The changes are pretty straight-forward:

 - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;

 - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;

 - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};

 - page_cache_get() -> get_page();

 - page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using
script below.  For some reason, coccinelle doesn't patch header files.
I've called spatch for them manually.

The only adjustment after coccinelle is the revert of changes to the
PAGE_CACHE_ALIGN definition: we are going to drop it later.

There are a few places in the code that coccinelle didn't reach.  I'll
fix them manually in a separate patch.  Comments and documentation also
will be addressed with the separate patch.

virtual patch

@@
expression E;
@@
- E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
expression E;
@@
- E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
+ E

@@
@@
- PAGE_CACHE_SHIFT
+ PAGE_SHIFT

@@
@@
- PAGE_CACHE_SIZE
+ PAGE_SIZE

@@
@@
- PAGE_CACHE_MASK
+ PAGE_MASK

@@
expression E;
@@
- PAGE_CACHE_ALIGN(E)
+ PAGE_ALIGN(E)

@@
expression E;
@@
- page_cache_get(E)
+ get_page(E)

@@
expression E;
@@
- page_cache_release(E)
+ put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-04-04 10:41:08 -07:00
Helge Deller 2c2277dc8e parisc: Improve debug info about space registers and TLB configuration
Signed-off-by: Helge Deller <deller@gmx.de>
2016-01-12 22:12:09 +01:00
John David Anglin 01ab605704 parisc: Fix some PTE/TLB race conditions and optimize __flush_tlb_range based on timing results
The increased use of pdtlb/pitlb instructions seemed to increase the
frequency of random segmentation faults when building packages. Further, we
had a number of cases where TLB inserts would repeatedly fail and all
forward progress would stop. The Haskell ghc package caused a lot of
trouble in this area. The final indication of a race in pte handling was
this syslog entry on sibaris (C8000):

 swap_free: Unused swap offset entry 00000004
 BUG: Bad page map in process mysqld  pte:00000100 pmd:019bbec5
 addr:00000000ec464000 vm_flags:00100073 anon_vma:0000000221023828 mapping: (null) index:ec464
 CPU: 1 PID: 9176 Comm: mysqld Not tainted 4.0.0-2-parisc64-smp #1 Debian 4.0.5-1
 Backtrace:
  [<0000000040173eb0>] show_stack+0x20/0x38
  [<0000000040444424>] dump_stack+0x9c/0x110
  [<00000000402a0d38>] print_bad_pte+0x1a8/0x278
  [<00000000402a28b8>] unmap_single_vma+0x3d8/0x770
  [<00000000402a4090>] zap_page_range+0xf0/0x198
  [<00000000402ba2a4>] SyS_madvise+0x404/0x8c0

Note that the pte value is 0 except for the accessed bit 0x100. This bit
shouldn't be set without the present bit.

It should be noted that the madvise system call is probably a trigger for many
of the random segmentation faults.

In looking at the kernel code, I found the following problems:

1) The pte_clear define didn't take the TLB lock when clearing a pte.
2) We didn't test the pte present bit inside the lock in the exception support code.
3) The pte and TLB locks needed to be merged in order to ensure consistency
between the page table and the TLB. This also has the effect of serializing TLB
broadcasts on SMP systems. (A sketch of the resulting update pattern follows
below.)
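
A hedged sketch of the combined update pattern referred to in item 3) (pa_tlb_lock and purge_tlb_entries() are named elsewhere in this history; the helper name itself is invented):

static inline void set_pte_and_purge(struct mm_struct *mm,
                                     unsigned long addr,
                                     pte_t *ptep, pte_t pteval)
{
        unsigned long flags;

        spin_lock_irqsave(&pa_tlb_lock, flags); /* one lock for PTE and TLB */
        set_pte(ptep, pteval);                  /* update the page table */
        purge_tlb_entries(mm, addr);            /* drop any stale TLB entry */
        spin_unlock_irqrestore(&pa_tlb_lock, flags);
}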

The attached change implements the above and a few other tweaks to try
to improve performance. Based on the timing code, TLB purges are very
slow (e.g., ~209 cycles per page on rp3440). Thus, I think it is
beneficial to test the split_tlb variable to avoid duplicate purges.
Probably, all PA 2.0 machines have combined TLBs.

I dropped using __flush_tlb_range in flush_tlb_mm as I realized all
applications and most threads have a stack size that is too large to
make this useful. I added some comments to this effect.

Since implementing 1 through 3, I haven't had any random segmentation
faults on mx3210 (rp3440) in about one week of building code and running
as a Debian buildd.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v3.18+
Signed-off-by: Helge Deller <deller@gmx.de>
2015-07-10 21:47:47 +02:00
Helge Deller 0ef36bd2b3 parisc: change value of SHMLBA from 0x00400000 to PAGE_SIZE
On parisc, SHMLBA was defined as 0x00400000 (4 MB) to reflect that we need to
take care of our caches for shared mappings. But actually, we can map a file at
any address that is a multiple of PAGE_SIZE, so let us correct that now with a
value of PAGE_SIZE for SHMLBA.  Instead, we now take care of cache colouring via
the constant SHM_COLOUR when we map shared pages.
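
In sketch form, the resulting definitions look roughly like this (illustrative only; the surrounding header is not shown):

/* shmparam.h sketch: mapping granularity drops to PAGE_SIZE, while the
 * 4 MB cache-colouring requirement moves to its own constant that is
 * consulted when choosing addresses for shared mappings. */
#define SHMLBA          PAGE_SIZE       /* was 0x00400000 */
#define SHM_COLOUR      0x00400000      /* D-cache colouring granularity */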

Signed-off-by: Helge Deller <deller@gmx.de>
CC: Jeroen Roovers <jer@gentoo.org>
CC: John David Anglin <dave.anglin@bell.net>
CC: Carlos O'Donell <carlos@systemhalted.org>
Cc: stable@kernel.org [3.13+]
2014-04-13 15:00:53 +02:00
John David Anglin 4b02a72a26 parisc: Remove unused CONFIG_PARISC_TMPALIAS code
The attached change removes the unused and experimental
CONFIG_PARISC_TMPALIAS code. It doesn't work and I don't believe it will
ever be used.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2014-03-23 16:46:30 +01:00
Helge Deller 57737c49dd parisc: fix cache-flushing
This commit:
f8dae00684d678afa13041ef170cecfd1297ed40: parisc: Ensure full cache coherency for kmap/kunmap
caused negative caching side-effects, e.g. hanging processes with expect and
too many inequivalent alias messages from flush_dcache_page() on Debian 5 systems.

This patch now partly reverts it and has been in production use on our Debian buildd
make servers for a week without any major problems.

Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v3.9+
Signed-off-by: Helge Deller <deller@gmx.de>
2014-02-02 20:57:16 +01:00
John David Anglin f8dae00684 parisc: Ensure full cache coherency for kmap/kunmap
Helge Deller noted a few weeks ago problems with the AIO support on
parisc. This change is the result of numerous iterations on how best to
deal with this problem.

The solution adopted here is to provide full cache coherency in a
uniform manner on all parisc systems. This involves calling
flush_dcache_page() on kmap operations and flush_kernel_dcache_page() on
kunmap operations. As a result, the copy_user_page() and
clear_user_page() functions can be removed and the overall code is
simpler.
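
A minimal sketch of that scheme, using the flush routines named above (the wrapper names are invented; the real hooks live in the parisc kmap/kunmap definitions):

/* Illustration only: flush on map and on unmap so the kernel and user
 * aliases of a page never hold stale data relative to each other. */
static inline void *kmap_coherent_sketch(struct page *page)
{
        flush_dcache_page(page);        /* flush/invalidate the user alias */
        return page_address(page);      /* hand out the kernel mapping */
}

static inline void kunmap_coherent_sketch(struct page *page)
{
        flush_kernel_dcache_page(page); /* flush the kernel alias */
}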

The change ensures that both userspace and kernel aliases to a mapped
page are invalidated and flushed. This is necessary for the correct
operation of PA8800 and PA8900 based systems which do not support
inequivalent aliases.

With this change, I have observed no cache related issues on c8000 and
rp3440. It is now possible for example to do kernel builds with "-j64"
on four way systems.

On systems using XFS file systems, the patch recently posted by Mikulas
Patocka to "fix crash using XFS on loopback" is needed to avoid a hang
caused by an uninitialized lock passed to flush_dcache_page() in the
page struct.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org # v3.9+
Signed-off-by: Helge Deller <deller@gmx.de>
2014-01-08 23:02:57 +01:00
Helge Deller a446e72bc1 Revert "parisc: Export flush_cache_page() (needed by lustre)"
This reverts commit 320c90be7b.

Christoph Hellwig <hch@infradead.org> commented:
This one shouldn't go in - Geert sent it a bit prematurely, as Lustre
shouldn't use it just to reimplement core VM functionality (which it
shouldn't use either, but that's a separate story).

Signed-off-by: Helge Deller <deller@gmx.de>
2013-10-19 21:37:40 +02:00
Geert Uytterhoeven 320c90be7b parisc: Export flush_cache_page() (needed by lustre)
ERROR: "flush_cache_page" [drivers/staging/lustre/lustre/libcfs/libcfs.ko] undefined!

Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-10-13 17:44:17 +02:00
John David Anglin 50861f5a02 parisc: Fix cache routines to ignore vma's with an invalid pfn
The parisc architecture does not have a pte special bit. As a result,
special mappings are handled with the VM_PFNMAP and VM_MIXEDMAP flags.
VM_MIXEDMAP mappings may or may not have a "struct page" backing. When
pfn_valid() is false, there is no "struct page" backing. Otherwise, they
are treated as normal pages.

The FireGL driver uses the VM_MIXEDMAP without a backing "struct page".
This treatment caused a panic due to a TLB data miss in
update_mmu_cache. This appeared to be in the code generated for
page_address(). We were in fact using a very circular bit of code to
determine the physical address of the PFN in various cache routines.
This wasn't valid when there was no "struct page" backing.  The needed
address can in fact be determined simply from the PFN itself without
using the "struct page".

The attached patch updates update_mmu_cache(), flush_cache_mm(),
flush_cache_range() and flush_cache_page() to check pfn_valid() and to
directly compute the PFN physical and virtual addresses.
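
Roughly, the guard looks like this (sketch only; on parisc, flush_dcache_page_asm takes a physical address and the user virtual address):

static void sketch_flush_user_cache_page(pte_t pte, unsigned long user_vaddr)
{
        unsigned long pfn = pte_pfn(pte);

        if (!pfn_valid(pfn))            /* VM_MIXEDMAP without struct page */
                return;

        /* physical address derived from the pfn, not from page_address() */
        flush_dcache_page_asm(PFN_PHYS(pfn), user_vaddr);
}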

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org> # 3.10
Signed-off-by: Helge Deller <deller@gmx.de>
2013-07-31 23:41:47 +02:00
John David Anglin e8d8fc219f parisc: Ensure volatile space register %sr1 is not clobbered
I still see the occasional random segv on rp3440.  Looking at one of
these (a code 15), it appeared the problem must be with the cache
handling of anonymous pages.  Reviewing this, I noticed that the space
register %sr1 might be getting clobbered when we flush an anonymous page.

Register %sr1 is used for TLB purges in a couple of places.  These
purges are needed on PA8800 and PA8900 processors to ensure cache
consistency of flushed cache lines.

The solution here is simply to move the %sr1 load into the TLB lock
region needed to ensure that one purge executes at a time on SMP
systems.  This was already the case for one use.  After a few days of
operation, I haven't had a random segv on my rp3440.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: <stable@vger.kernel.org> # 3.10
Signed-off-by: Helge Deller <deller@gmx.de>
2013-07-09 22:09:22 +02:00
Zhao Hongjiang ba969c44ed parisc: remove the second argument of kmap_atomic
kmap_atomic allows only one argument now, so just remove the second.

Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-05-06 22:19:00 +02:00
John David Anglin bda079d336 parisc: use spin_lock_irqsave/spin_unlock_irqrestore for PTE updates
User applications running on SMP kernels have long suffered from instability
and random segmentation faults.  This patch improves the situation although
there is more work to be done.

One of the problems is that the various routines in pgtable.h which update page
table entries use different locking mechanisms, or no lock at all (set_pte_at).
This change modifies the routines to all use the same lock, pa_dbit_lock.  This lock
is used for dirty bit updates in the interruption code. The patch also purges
the TLB entries associated with the PTE to ensure that inconsistent values are
not used after the page table entry is updated.  The UP and SMP code are now
identical.

The change also includes a minor update to the purge_tlb_entries function in
cache.c to improve its efficiency.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: Helge Deller <deller@gmx.de>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-04-25 22:37:00 +02:00
John David Anglin 027f27c4ec parisc: disable preemption while flushing D- or I-caches through TMPALIAS region
It is necessary to disable preemption during cache flushes done through the
TMPALIAS region to ensure that the TLB setup is not clobbered by another flush.
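
The pattern, in sketch form (the flush helper is one of the TMPALIAS routines named in the neighbouring commits):

/* Sketch only: the tmpalias window is per-CPU TLB state, so the task
 * must not migrate or be preempted between TLB setup and flush. */
static void sketch_tmpalias_flush(struct page *page, unsigned long user_vaddr)
{
        preempt_disable();
        flush_dcache_page_asm(PFN_PHYS(page_to_pfn(page)), user_vaddr);
        preempt_enable();
}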

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-02-20 22:50:38 +01:00
John David Anglin cca8e90260 parisc: fixes and cleanups in page cache flushing (4/4)
CONFIG_PARISC_TMPALIAS enables clear_user_highpage and copy_user_highpage.
These are essentially alternative implementations of clear_user_page and
copy_user_page.  They don't have anything to do with x86 high pages, but they
build on the infrastructure to save a few instructions.  Read the comment in
clear_user_highpage as it is very important to the implementation.  For this
reason, there isn't any gain in using the TMPALIAS/highpage approach.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-02-20 22:49:49 +01:00
John David Anglin 6d2439d955 parisc: fixes and cleanups in page cache flushing (3/4)
For the non-current case, flush_cache_mm also uses flush_dcache_page_asm
and flush_icache_page_asm, which are TMPALIAS flushes.

For the non-current case, the algorithm used by get_ptep is derived from the
vmalloc_to_page implementation in vmalloc.c.  It is essentially a generic page
table lookup.  The other alternative was to duplicate the lookup in entry.S.  The
break point for switching to a full cache flush is somewhat arbitrary.  The
same approach is used in flush_cache_range for the non-current case.  In a GCC
build and check, many small programs are executed, and this change provided a
significant performance enhancement, e.g. GCC build time was cut almost in half
on an rp3440 with -j4.  Previously, we always flushed the entire cache.
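
A generic lookup of that shape, sketched against the page-table API of that era (no huge-page or locking details shown):

static pte_t *sketch_get_ptep(struct mm_struct *mm, unsigned long addr)
{
        pgd_t *pgd = pgd_offset(mm, addr);

        if (!pgd_none(*pgd)) {
                pud_t *pud = pud_offset(pgd, addr);
                if (!pud_none(*pud)) {
                        pmd_t *pmd = pmd_offset(pud, addr);
                        if (!pmd_none(*pmd))
                                return pte_offset_map(pmd, addr);
                }
        }
        return NULL;
}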

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-02-20 22:49:38 +01:00
John David Anglin 7633453978 parisc: fixes and cleanups in page cache flushing (1/4)
This is the first patch in a series of 4, with which the page cache flushing of
parisc gets fixed and enhanced. This even fixes the nasty "minifail" bug
(http://wiki.parisc-linux.org/TestCases?highlight=%28minifail%29) which
prevented parisc from staying an official Debian port.  Basically, the flush in
copy_user_page together with the TLB patch from commit
7139bc1579 is what fixes the minifail bug.

This patch still uses the TMPALIAS approach.  The new copy_user_page
implementation calls flush_dcache_page_asm to flush the user dcache page
(crucial for the minifail fix) via a kernel TMPALIAS mapping.  After that, it just
copies the page using the kernel mapping.  It does a final flush if needed.
Generally it is hard to avoid doing some cache flushes using the kernel mapping
(e.g., copy_to_user_page and copy_from_user_page).
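
In outline (sketch only; copy_page_asm comes from the pacache.S change mentioned below):

/* Flush the user alias through the TMPALIAS window, then copy the page
 * via the ordinary kernel mapping. */
static void sketch_copy_user_page(void *vto, void *vfrom,
                                  unsigned long user_vaddr)
{
        flush_dcache_page_asm(__pa(vfrom), user_vaddr); /* user dcache alias */
        copy_page_asm(vto, vfrom);                      /* kernel-mapping copy */
}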

This patch depends on a subsequent change to pacache.S implementing
clear_page_asm and copy_page_asm.  These are optimized routines to clear and
copy a page.  The calls in clear_user_page and copy_user_page could be replaced
by calls to memset and memcpy, respectively.  I tested prefetch optimizations
in clear_page_asm and copy_page_asm but didn't see any significant performance
improvement on rp3440.  I'm not sure if these routines are significantly
faster than memset and/or memcpy, but they are there for further performance
evaluation.

Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
2013-02-20 22:49:19 +01:00
John David Anglin 7139bc1579 [PARISC] Purge existing TLB entries in set_pte_at and ptep_set_wrprotect
This patch goes a long way toward fixing the minifail bug, and
it significantly improves the stability of SMP machines such as
the rp3440.  When write-protecting a page for COW, we need to
purge the existing translation.  Otherwise, the COW break
doesn't occur as expected because the TLB may still have a stale entry
which allows writes.
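
The idea, sketched (purge_tlb_entries() is referred to elsewhere in this history; the wrapper name is illustrative):

/* After tightening a live pte for COW, purge its translation so the
 * next write traps instead of hitting a stale writable TLB entry. */
static inline void sketch_ptep_set_wrprotect(struct mm_struct *mm,
                                             unsigned long addr, pte_t *ptep)
{
        set_pte(ptep, pte_wrprotect(*ptep));
        purge_tlb_entries(mm, addr);
}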

[jejb: fix up checkpatch errors]
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Cc: stable@vger.kernel.org
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
2013-01-19 10:54:45 +00:00
Michel Lespinasse 6b2dbba8b6 mm: replace vma prio_tree with an interval tree
Implement an interval tree as a replacement for the VMA prio_tree.  The
algorithms are similar to lib/interval_tree.c; however that code can't be
directly reused as the interval endpoints are not explicitly stored in the
VMA.  So instead, the common algorithm is moved into a template and the
details (node type, how to get interval endpoints from the node, etc) are
filled in using the C preprocessor.

Once the interval tree functions are available, using them as a
replacement to the VMA prio tree is a relatively simple, mechanical job.

Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-10-09 16:22:39 +09:00
David Howells 527dcdccd6 Disintegrate asm/system.h for PA-RISC
Disintegrate asm/system.h for PA-RISC.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-parisc@vger.kernel.org
2012-03-28 18:30:02 +01:00
James Bottomley b7d4581844 [PARISC] prevent speculative re-read on cache flush
According to Appendix F, the TLB is the primary arbiter of speculation.
Thus, if a page has a TLB entry, it may be speculatively read into the
cache.  On Linux, this can cause incoherencies because if we're about
to do a disk read, we call get_user_pages() to do the flush/invalidate
in user space, but we still potentially have the user TLB entries, and
the cache could speculate the lines back into userspace (thus causing
stale data to be used).  This is fixed by purging the TLB entries before
we flush through the tmpalias space.  Now, the only way the line could
be re-speculated is if the user actually tries to touch it (which is not
allowed).
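
As a sketch, the ordering is the whole point (helper names as used elsewhere in this log):

/* Kill the user translation first -- with no TLB entry the line cannot
 * be speculatively refetched while we flush it through tmpalias. */
static void sketch_flush_user_page(struct vm_area_struct *vma,
                                   struct page *page,
                                   unsigned long user_vaddr)
{
        flush_tlb_page(vma, user_vaddr);
        flush_dcache_page_asm(PFN_PHYS(page_to_pfn(page)), user_vaddr);
}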

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
2011-04-15 12:55:56 -05:00
James Bottomley f311847c2f parisc: flush pages through tmpalias space
The kernel has an 8M tmpalias space (originally designed for copying
and clearing pages but now only used for clearing).  The idea is
to place zeros into the cache above a physical page rather than into
the physical page and flush the cache, because often the zeros end up
being replaced quickly anyway.

We can also use the tmpalias space for flushing a page.  The difference
here is that we have to do tmpalias processing in the non-access data and
instruction traps.  The principle is the same: as long as we know the physical
address and have a virtual address congruent to the real one, the flush will
be effective.

In order to use the tmpalias space, the icache miss path has to be enhanced to
check for the alias region to make the fic instruction effective.

Signed-off-by: James Bottomley <James.Bottomley@suse.de>
2011-01-15 08:44:40 -06:00
Frans Pop a3bee03e71 parisc: remove trailing space in messages
Signed-off-by: Frans Pop <elendil@planet.nl>
Cc: linux-parisc@vger.kernel.org
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
2010-03-06 22:54:09 +00:00
Russell King 4b3073e1c5 MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
On VIVT ARM, when we have multiple shared mappings of the same file
in the same MM, we need to ensure that we have coherency across all
copies.  We do this via make_coherent() by making the pages
uncacheable.

This used to work fine, until we allowed highmem with highpte - we
now have a page table which is mapped as required, and is not available
for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to
update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again.  Passing a pte_t * is much
  more elegant.  Maybe we might even replace the pte argument with the
  pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want.  I want that
  -instead- of the PTE value, because I have issue on some ppc cases,
  for I$/D$ coherency, where set_pte_at() may decide to mask out the
  _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and
remove the PTE value, updating all implementations and call sites to
suit.
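
The shape of the change, as an illustrative stub (the body is not any architecture's actual implementation):

/* old: void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t);
 * new: the PTE is passed by pointer, so implementations can re-read or
 *      modify the entry through it. */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
                      pte_t *ptep)
{
        pte_t pte = *ptep;      /* the value is still one dereference away */

        if (!pte_present(pte))  /* illustrative early-out only */
                return;
        /* arch-specific I-cache/D-cache maintenance would go here */
}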

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2010-02-20 16:41:46 +00:00
Helge Deller e82a3b7512 parisc: ensure broadcast tlb purge runs single threaded
The TLB flushing functions on hppa, which cause PxTLB broadcasts on the system
bus, need to be protected by irq-safe spinlocks to prevent irq handlers from
deadlocking the kernel. The deadlocks only happened during I/O-intensive loads
and triggered rather seldom, which is why this bug went unnoticed for so long.

Signed-off-by: Helge Deller <deller@gmx.de>
[edited to use spin_lock_irqsave on UP as well since we'd been locking there
 all this time anyway, --kyle]
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
2009-07-03 03:34:09 +00:00
Alexander Beregalov 071327ec90 parisc: remove CVS keywords
Signed-off-by: Alexander Beregalov <a.beregalov@gmail.com>
Acked-by: Matthew Wilcox <willy@linux.intel.com>
Acked-by: Grant Grundler <grundler@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
2009-07-03 03:34:06 +00:00
Helge Deller 8980a7baf9 parisc: BUG_ON() cleanup
- convert a few "if (xx) BUG();" to BUG_ON(xx)
- remove a few printk()s, as we get a backtrace with BUG_ON() anyway

Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
2009-03-13 01:16:35 -04:00
Jens Axboe 15c8b6c1aa on_each_cpu(): kill unused 'retry' parameter
It's not even passed on to smp_call_function() anymore, since that
was removed. So kill it.

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
2008-06-26 11:24:38 +02:00
Joe Perches 9eea51808a arch/parisc/: Spelling fixes
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
2008-02-03 16:58:20 +02:00
Alexey Dobriyan e8edc6e03a Detach sched.h from mm.h
The first thing mm.h does is include sched.h, solely for the can_do_mlock()
inline function, which dereferences "current". By dealing with can_do_mlock(),
mm.h can be detached from sched.h, which is good. See below for why.

This patch
a) removes unconditional inclusion of sched.h from mm.h
b) makes can_do_mlock() normal function in mm/mlock.c
c) exports can_do_mlock() to not break compilation
d) adds sched.h inclusions back to files that were getting it indirectly.
e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
   getting them indirectly

Net result is:
a) mm.h users would get less code to open, read, preprocess, parse, ... if
   they don't need sched.h
b) sched.h stops being a dependency for a significant number of files:
   on x86_64 allmodconfig, touching sched.h results in a recompile of 4083 files;
   after the patch it's only 3744 (-8.3%).

Cross-compile tested on

	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
	alpha alpha-up
	arm
	i386 i386-up i386-defconfig i386-allnoconfig
	ia64 ia64-up
	m68k
	mips
	parisc parisc-up
	powerpc powerpc-up
	s390 s390-up
	sparc sparc-up
	sparc64 sparc64-up
	um-x86_64
	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig

as well as my two usual configs.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-05-21 09:18:19 -07:00
Helge Deller 2f75c12c66 [PARISC] Fixes /proc/cpuinfo cache output on B160L
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2007-02-17 01:15:16 -05:00
Randolph Chung d6ce8626db [PARISC] Clean up the cache and tlb headers
No changes in functionality.

Signed-off-by: Randolph Chung <tausq@debian.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2007-02-17 00:41:30 -05:00
Matthew Wilcox 8d0b7d1055 [PARISC] Export clear_user_page to modules
Signed-off-by: Matthew Wilcox <willy@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-10-04 06:51:16 -06:00
Matthew Wilcox d207ac0f7c [PARISC] Fix CONFIG_DEBUG_SPINLOCK
Joel Soete points out that we refer to pa_tlb_lock but only define it if
CONFIG_SMP is set, which breaks a uniprocessor build with CONFIG_DEBUG_SPINLOCK
enabled.  No module refers to pa_tlb_lock, so we can delete the export.

Signed-off-by: Matthew Wilcox <willy@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-10-04 06:50:46 -06:00
James Bottomley 20f4d3cb9b [PARISC] parisc specific kmap API implementation for pa8800
This patch fixes the pa8800 at a gross level (there are still other
subtle incoherency issues which can still cause crashes and HPMCs).

What it does is try to force eject inequivalent aliases before they
become visible to the L2 cache (which is where we get the incoherence
problems).

A new function (parisc_requires_coherency) is introduced in
asm/processor.h to identify the pa8x00 processors (8800 and 8900)
which have the issue.

Signed-off-by: James Bottomley <James.Bottomley@SteelEye.com>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-10-04 06:46:21 -06:00
Kyle McMartin a9d2d386c4 [PARISC] Ensure Space ID hashing is turned off
Check PDC_CACHE to see if spaceid hashing is turned on, and fail to
boot if that is the case.

However, some old machines do not implement the PDC_CACHE_RET_SPID
firmware call, so continue to boot if the call fails because of
PDC_BAD_OPTION (but fail in all other error returns).
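
A hedged sketch of that boot-time check (the firmware wrapper name pdc_spaceid_bits() and the exact return-code handling are assumptions here):

/* Refuse to boot with space ID hashing enabled, but tolerate firmware
 * that does not implement the PDC_CACHE_RET_SPID query at all. */
static void __init sketch_check_sid_hashing(void)
{
        unsigned long space_bits;
        int ret = pdc_spaceid_bits(&space_bits);        /* assumed wrapper */

        if (ret == PDC_OK && space_bits != 0)
                panic("Space ID hashing is still turned on!\n");
        if (ret != PDC_OK && ret != PDC_BAD_OPTION)
                panic("PDC_CACHE_RET_SPID firmware call failed (%d)\n", ret);
}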

Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-06-27 23:28:42 +00:00
Kyle McMartin e5a2e7fdb5 [PARISC] Match show_cache_info with reality
show_cache_info and struct pdc_cache_cf were out of sync with
published documentation. Fix the reporting of cache associativity
and update the pdc_cache_cf bitfields to match documentation.

Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-06-27 23:28:41 +00:00
Helge Deller 67a5a59d33 [PARISC] Misc. janitorial work
Fix a spelling mistake, add a KERN_INFO flag, and fix some whitespace
uglies.

Signed-off-by: Helge Deller <deller@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-04-21 22:20:32 +00:00
James Bottomley ba57583396 [PARISC] Add parisc implementation of flush_kernel_dcache_page()
We need to do a little renaming of our original syntax because
of the difference in arguments.

Signed-off-by: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-03-30 17:48:44 +00:00
Matthew Wilcox 1b2425e3c7 [PARISC] Make local cache flushes take a void *
Make flush_data_cache_local, flush_instruction_cache_local and
flush_tlb_all_local take a void * so they don't have to be cast
when using on_each_cpu().  This becomes a problem when on_each_cpu
is a macro (as it is in current -mm).

Also move the prototype of flush_tlb_all_local into tlbflush.h and
remove its declaration from .c files.

Signed-off-by: Matthew Wilcox <willy@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-01-10 21:49:21 -05:00
Helge Deller 8039de10aa [PARISC] Add __read_mostly section for parisc
Flag a whole bunch of things as __read_mostly on parisc. Also flag a few
branches as unlikely() and clean up a bit of code.

Signed-off-by: Helge Deller <deller@parisc-linux.org>
Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2006-01-10 20:35:03 -05:00
Hugh Dickins 92dc6fcc84 [PATCH] mm: parisc pte atomicity
There's a worrying function translation_exists in parisc cacheflush.h,
unaffected by split ptlock since flush_dcache_page is using it on some other
mm, without any relevant lock.  Oh well, make it slightly more robust by
factoring the pfn check within it.  And it looked liable to confuse a
camouflaged swap or file entry with a good pte: fix that too.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-10-29 21:40:42 -07:00
Stuart Brady 2464212f68 [PARISC] Fix parisc_setup_cache_timing to choose a better flush threshold
update comment about CAFL_STRIDE

Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>

Fixed a bug in parisc_setup_cache_timing() which caused it to calculate
a poor value for parisc_cache_flush_threshold.

Thanks to Joel Soete for spotting the bug.
Thanks to James Bottomley for pointing out the clean way to fix this.

Signed-off-by: Stuart Brady <sdb@parisc-linux.org>

Signed-off-by: Kyle McMartin <kyle@parisc-linux.org>
2005-10-21 22:44:14 -04:00
Linus Torvalds 1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00