I have two fixes:
 
 * 16KiB kernel stacks on rv64, which fixes a lot of crashes.
 * Rolling an mmiowb() into the scheduler, which when combined with Will's fix
   to the mmiowb()-on-spinlock should fix the PREEMPT issues we've been seeing.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCgAxFiEEKzw3R0RoQ7JKlDp6LhMZ81+7GIkFAl8TMn0THHBhbG1lckBk
 YWJiZWx0LmNvbQAKCRAuExnzX7sYiSYpEADJ/RVmg+79nqy+EOiY+YLVCEhIWnVY
 KCDru9qEmO878QGQXYrwWwAmt+uWxgPdk7So/4E8IDErHp4V8wBz9C0cRm/0ReDd
 0tslp1P6v8NZXHmHUPhv2pAN5WoKe1pe83W5lpbO/0TxftyhuxmaKN92cQGTKOUH
 dMiP1LYgjd+0n+KAcMmRR63aUSoH4AXKiZcZu+GxXTXtb42CvUKFp/gPur5LUoak
 XvKB8eQsBPz8r4I4gFPw0XU0q4IfVgRiOWEPZefPWh72ngurbCPukCyc94tPOfsq
 PG/5I5oWveuFg7/gigNauHGCsttuLNxQXIAdnHzPWDFg3HHcUo1pCVoqQxXQX7In
 uYM+ZCCB5A0WQkUAtItKHpGzNDEA68APW34iR+RtX8374fnlGt9viFaNSF0phTcC
 GGq6YwV2c4m10vJxciOJapYyWsu6oLclmcmRCdKEpO0nHHEp7VGVAnVEYPV+OfnW
 Z8CuE2UAxQF7V6l7BrXmZFwGcxAt/0an9nuvI19CQkhkr0hnL58VLfNCS1a+w0xh
 Zu9ZYO0sHKlvyzgzkOxjOVe2H3pYgmDLWIAVqGC5R9sruYr0sr9QtKR324wsh/bd
 /g/5b2H6lLZQMXqoHsCf6OAliEl18+yGTiU+r9Ikb0aWf3OGCGiYvoMKS9AXT+K6
 /9UjXGhrOFrA0A==
 =XoUI
 -----END PGP SIGNATURE-----

Merge tag 'riscv-for-linus-5.8-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux into master

Pull RISC-V fixes from Palmer Dabbelt:
 "Two fixes:

   - 16KiB kernel stacks on rv64, which fixes a lot of crashes.

   - Rolling an mmiowb() into the scheduler, which when combined with
     Will's fix to the mmiowb()-on-spinlock should fix the PREEMPT
     issues we've been seeing"

* tag 'riscv-for-linus-5.8-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
  RISC-V: Upgrade smp_mb__after_spinlock() to iorw,iorw
  riscv: use 16KB kernel stack on 64-bit
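
To make the second fix concrete, a minimal sketch (not code from this merge; struct foo_dev, FOO_DOORBELL and foo_kick_device() are made-up names used only for illustration): a driver issues an MMIO write from a preemptible region and then sleeps, so the task may resume on a different hart, and the fence executed on the scheduling path must order the device write as well.

#include <linux/io.h>
#include <linux/delay.h>

/* Hypothetical device, for illustration only; not part of this merge. */
struct foo_dev {
	void __iomem *regs;
};
#define FOO_DOORBELL 0x0 /* made-up register offset */

static void foo_kick_device(struct foo_dev *dev)
{
	/* MMIO write issued from a preemptible region. */
	writel(1, dev->regs + FOO_DOORBELL);

	/*
	 * If the task is preempted or sleeps here and later resumes on a
	 * different hart, the barrier the scheduler executes via
	 * smp_mb__after_spinlock() must also order the device write above;
	 * that is what the rw,rw -> iorw,iorw upgrade below provides.
	 */
	msleep(1);
}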
Linus Torvalds 2020-07-18 11:10:06 -07:00
Parent 721db9dfb1 38b7c2a3ff
Commit 6cf7ccba29
2 changed files: 13 additions and 1 deletion

@@ -58,8 +58,16 @@ do { \
  * The AQ/RL pair provides a RCpc critical section, but there's not really any
  * way we can take advantage of that here because the ordering is only enforced
  * on that one lock. Thus, we're just doing a full fence.
+ *
+ * Since we allow writeX to be called from preemptive regions we need at least
+ * an "o" in the predecessor set to ensure device writes are visible before the
+ * task is marked as available for scheduling on a new hart. While I don't see
+ * any concrete reason we need a full IO fence, it seems safer to just upgrade
+ * this in order to avoid any IO crossing a scheduling boundary. In both
+ * instances the scheduler pairs this with an mb(), so nothing is necessary on
+ * the new hart.
  */
-#define smp_mb__after_spinlock() RISCV_FENCE(rw,rw)
+#define smp_mb__after_spinlock() RISCV_FENCE(iorw,iorw)
 
 #include <asm-generic/barrier.h>

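For reference, and from memory of this kernel version rather than from the hunk above: RISCV_FENCE expands to a literal fence instruction, so the change swaps the scheduler-side barrier from "fence rw,rw" to "fence iorw,iorw", adding device (I/O) accesses to both the predecessor and successor sets.

/* RISCV_FENCE as defined earlier in the same header (approximately): */
#define RISCV_FENCE(p, s) \
	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")

/* Before: fence rw,rw   After: fence iorw,iorw */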
@@ -12,7 +12,11 @@
 #include <linux/const.h>
 
 /* thread information allocation */
+#ifdef CONFIG_64BIT
+#define THREAD_SIZE_ORDER (2)
+#else
 #define THREAD_SIZE_ORDER (1)
+#endif
 #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER)
 
 #ifndef __ASSEMBLY__
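
As a quick arithmetic check, a minimal sketch assuming the usual 4 KiB base page size (PAGE_SIZE comes from the kernel configuration, not from this hunk): the new order gives 16 KiB kernel stacks on rv64, while rv32 keeps 8 KiB.

#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096; /* assumed 4 KiB pages */

	/* rv64 after this change: THREAD_SIZE_ORDER = 2, i.e. 4 pages. */
	printf("rv64: %lu KiB\n", (page_size << 2) / 1024);

	/* rv32 keeps THREAD_SIZE_ORDER = 1, i.e. 2 pages. */
	printf("rv32: %lu KiB\n", (page_size << 1) / 1024);

	return 0;
}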