RISC-V Patches for the 5.2 Merge Window, Part 1 v3

This patch set contains an assortment of RISC-V related patches that I'd
 like to target for the 5.2 merge window.  Most of the patches are
 cleanups, but there are a handful of user-visible changes:
 
 * The nosmp and nr_cpus command-line arguments are now supported, and
   work as they do on other architectures (see the usage sketch after
   this list).
 * The SBI console no longer installs itself as a preferred console; we
   rely on standard mechanisms (/chosen, command-line, heuristics)
   instead.
 * sbi_remote_sfence_vma{,_asid} now pass their arguments along to the
   SBI call.
 * Modules now support BUG().
 * An sfence.vma that was missing from the boot path has been added.
   The bug it fixes only manifests during boot.
 * The arch/riscv support for SiFive's L2 cache controller has been
   merged, which should un-block the EDAC framework work.
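 
 As a quick usage sketch for the new command-line options (both are
 standard kernel parameters; the QEMU invocations are illustrative
 only):
 
   qemu-system-riscv64 -M virt -smp 4 -kernel Image -append "nr_cpus=2"
   qemu-system-riscv64 -M virt -smp 4 -kernel Image -append "nosmp"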
 
 I've only tested this on QEMU again, as I didn't have time to get things
 running on the Unleashed.  The latest master from this morning merges in
 cleanly and passes the tests as well.
 
 This patch set is a rebase of my "5.2 MW, Part 1" patch set, which
 included an erroneous empty file.  It's also a rebase of my "5.2 MW,
 Part 2" patch set, in which I managed to create another file while
 attempting to remove the empty one.
 
 Sorry for all the noise!
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEAM520YNJYN/OiG3470yhUCzLq0EFAlzeLhUTHHBhbG1lckBk
 YWJiZWx0LmNvbQAKCRDvTKFQLMurQXV/D/9nz8KYxNKOVIXft27mw93Qnx5joblg
 fibA7nGDuxCszSC3tfyaROJZuGKe1G24vP4RG7aVs+iwRmmFhtVdPwm7ZvIr+DfU
 a5mzwWkxhMZP8lgxMAIn7iM/NWrBm7rWdGTU0BYjHlGkQ5z3WA67rU/r/vrowhUN
 zK1U/ATLvFWDJv5rdDj8/T2rDJzWtAsuy2qlmQN30CCJoOXXgIdAj+fVG4IYoxO9
 2+NFJU4Y0a+YczWW3qaGFjTaYYt/sNr/uA8AoBNqV1NvsopK1UO3txbcfJwvZZC3
 JFU9WBjC7xuF2ihMWecIZ7XljZeqhlsP7lZDizatQ/mdL9k7+6elk1sdcNLC23dN
 VWJakudE42dISCwSh49fAbeNSl/3R5VWSlZmVO18gsmslkGa4FwuoKjklnxx7hYx
 fQfvaqMIEXy3YmKtmFneUXLdcGoWOjV0FfDh5Ye582tAmB2TzvgEJHPJI7suUA/a
 RkZHcmVJTSRBMe2fS0qkYxy/wdIDtRW2yjypssl9G6zQPPCVW+maD70m/9oVdsgm
 IL8MpoDxW0uAYsV8Ctt1/+Ux+BObMADIml/1HPQyBRA0qhorQQWk0TcbjEXeIShs
 OOG8byAQUJx98z62zrKQ53+Pxdevcja6uKxu3f0yEHxl19dBJdT2BM6rjs3sO1hi
 c3tX/U8o39H0Kg==
 =mZwx
 -----END PGP SIGNATURE-----

Merge tag 'riscv-for-linus-5.2-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux

Pull RISC-V updates from Palmer Dabbelt:
 "This contains an assortment of RISC-V related patches that I'd like to
  target for the 5.2 merge window. Most of the patches are cleanups, but
  there are a handful of user-visible changes:

   - The nosmp and nr_cpus command-line arguments are now supported,
     and work as they do on other architectures.

   - The SBI console no longer installs itself as a preferred console;
     we rely on standard mechanisms (/chosen, command-line, heuristics)
     instead.

   - sbi_remote_sfence_vma{,_asid} now pass their arguments along to
     the SBI call.

   - Modules now support BUG().

   - An sfence.vma that was missing from the boot path has been added.
     The bug it fixes only manifests during boot.

   - The arch/riscv support for SiFive's L2 cache controller has been
     merged, which should un-block the EDAC framework work.

  I've only tested this on QEMU again, as I didn't have time to get
  things running on the Unleashed. The latest master from this morning
  merges in cleanly and passes the tests as well"

* tag 'riscv-for-linus-5.2-mw2' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux: (31 commits)
  riscv: fix locking violation in page fault handler
  RISC-V: sifive_l2_cache: Add L2 cache controller driver for SiFive SoCs
  RISC-V: Add DT documentation for SiFive L2 Cache Controller
  RISC-V: Avoid using invalid intermediate translations
  riscv: Support BUG() in kernel module
  riscv: Add the support for c.ebreak check in is_valid_bugaddr()
  riscv: support trap-based WARN()
  riscv: fix sbi_remote_sfence_vma{,_asid}.
  riscv: move switch_mm to its own file
  riscv: move flush_icache_{all,mm} to cacheflush.c
  tty: Don't force RISCV SBI console as preferred console
  RISC-V: Access CSRs using CSR numbers
  RISC-V: Add interrupt related SCAUSE defines in asm/csr.h
  RISC-V: Use tabs to align macro values in asm/csr.h
  RISC-V: Fix minor checkpatch issues.
  RISC-V: Support nr_cpus command line option.
  RISC-V: Implement nosmp commandline option.
  RISC-V: Add RISC-V specific arch_match_cpu_phys_id
  riscv: vdso: drop unnecessary cc-ldoption
  riscv: call pm_power_off from machine_halt / machine_power_off
  ...
Linus Torvalds 2019-05-19 09:56:36 -07:00
Parents: 72cf0b0741 8fef9900d4
Commit: b0bb1269b9
36 changed files with 632 additions and 318 deletions

@@ -0,0 +1,51 @@
SiFive L2 Cache Controller
--------------------------
The SiFive Level 2 Cache Controller is used to provide access to fast copies
of memory for masters in a Core Complex. The Level 2 Cache Controller also
acts as a directory-based coherency manager.
All the properties in the ePAPR/DeviceTree specification apply to this
platform.
Required Properties:
--------------------
- compatible: Should be "sifive,fu540-c000-ccache" and "cache"
- cache-block-size: Specifies the block size in bytes of the cache.
Should be 64
- cache-level: Should be set to 2 for a level 2 cache
- cache-sets: Specifies the number of associativity sets of the cache.
Should be 1024
- cache-size: Specifies the size in bytes of the cache. Should be 2097152
- cache-unified: Specifies the cache is a unified cache
- interrupts: Must contain 3 entries (DirError, DataError and DataFail signals)
- reg: Physical base address and size of L2 cache controller registers map
Optional Properties:
--------------------
- next-level-cache: phandle to the next level cache if present.
- memory-region: reference to the reserved-memory for the L2 Loosely Integrated
Memory region. The reserved memory node should be defined as per the bindings
in reserved-memory.txt
Example:
cache-controller@2010000 {
compatible = "sifive,fu540-c000-ccache", "cache";
cache-block-size = <64>;
cache-level = <2>;
cache-sets = <1024>;
cache-size = <2097152>;
cache-unified;
interrupt-parent = <&plic0>;
interrupts = <1 2 3>;
reg = <0x0 0x2010000 0x0 0x1000>;
next-level-cache = <&L25 &L40 &L36>;
memory-region = <&l2_lim>;
};

@@ -27,7 +27,7 @@ config RISCV
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select GENERIC_SMP_IDLE_THREAD
select GENERIC_ATOMIC64 if !64BIT || !RISCV_ISA_A
select GENERIC_ATOMIC64 if !64BIT
select HAVE_ARCH_AUDITSYSCALL
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_DMA_CONTIGUOUS
@@ -35,7 +35,6 @@ config RISCV
select HAVE_PERF_EVENTS
select HAVE_SYSCALL_TRACEPOINTS
select IRQ_DOMAIN
select RISCV_ISA_A if SMP
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
select HAVE_ARCH_TRACEHOOK
@@ -195,9 +194,6 @@ config RISCV_ISA_C
If you don't know what to do here, say Y.
config RISCV_ISA_A
def_bool y
menu "supported PMU type"
depends on PERF_EVENTS

@@ -39,9 +39,8 @@ endif
KBUILD_CFLAGS += -Wall
# ISA string setting
riscv-march-$(CONFIG_ARCH_RV32I) := rv32im
riscv-march-$(CONFIG_ARCH_RV64I) := rv64im
riscv-march-$(CONFIG_RISCV_ISA_A) := $(riscv-march-y)a
riscv-march-$(CONFIG_ARCH_RV32I) := rv32ima
riscv-march-$(CONFIG_ARCH_RV64I) := rv64ima
riscv-march-$(CONFIG_FPU) := $(riscv-march-y)fd
riscv-march-$(CONFIG_RISCV_ISA_C) := $(riscv-march-y)c
KBUILD_CFLAGS += -march=$(subst fd,,$(riscv-march-y))
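
As a worked example of how the ISA string accumulates here, assuming a
common rv64 configuration with CONFIG_FPU=y and CONFIG_RISCV_ISA_C=y:

  # CONFIG_ARCH_RV64I=y   ->  riscv-march-y = rv64ima
  # CONFIG_FPU=y          ->  riscv-march-y = rv64imafd
  # CONFIG_RISCV_ISA_C=y  ->  riscv-march-y = rv64imafdc
  # $(subst fd,,$(riscv-march-y)) then yields -march=rv64imac for
  # KBUILD_CFLAGS, since kernel C code must not use the FPU.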

@@ -4,6 +4,7 @@ generic-y += compat.h
generic-y += cputime.h
generic-y += device.h
generic-y += div64.h
generic-y += extable.h
generic-y += dma.h
generic-y += dma-contiguous.h
generic-y += dma-mapping.h

@@ -21,7 +21,12 @@
#include <asm/asm.h>
#ifdef CONFIG_GENERIC_BUG
#define __BUG_INSN _AC(0x00100073, UL) /* ebreak */
#define __INSN_LENGTH_MASK _UL(0x3)
#define __INSN_LENGTH_32 _UL(0x3)
#define __COMPRESSED_INSN_MASK _UL(0xffff)
#define __BUG_INSN_32 _UL(0x00100073) /* ebreak */
#define __BUG_INSN_16 _UL(0x9002) /* c.ebreak */
#ifndef __ASSEMBLY__
typedef u32 bug_insn_t;
@@ -38,38 +43,46 @@ typedef u32 bug_insn_t;
#define __BUG_ENTRY \
__BUG_ENTRY_ADDR "\n\t" \
__BUG_ENTRY_FILE "\n\t" \
RISCV_SHORT " %1"
RISCV_SHORT " %1\n\t" \
RISCV_SHORT " %2"
#else
#define __BUG_ENTRY \
__BUG_ENTRY_ADDR
__BUG_ENTRY_ADDR "\n\t" \
RISCV_SHORT " %2"
#endif
#define BUG() \
#define __BUG_FLAGS(flags) \
do { \
__asm__ __volatile__ ( \
"1:\n\t" \
"ebreak\n" \
".pushsection __bug_table,\"a\"\n\t" \
".pushsection __bug_table,\"aw\"\n\t" \
"2:\n\t" \
__BUG_ENTRY "\n\t" \
".org 2b + %2\n\t" \
".org 2b + %3\n\t" \
".popsection" \
: \
: "i" (__FILE__), "i" (__LINE__), \
"i" (sizeof(struct bug_entry))); \
unreachable(); \
"i" (flags), \
"i" (sizeof(struct bug_entry))); \
} while (0)
#endif /* !__ASSEMBLY__ */
#else /* CONFIG_GENERIC_BUG */
#ifndef __ASSEMBLY__
#define BUG() \
do { \
#define __BUG_FLAGS(flags) do { \
__asm__ __volatile__ ("ebreak\n"); \
unreachable(); \
} while (0)
#endif /* !__ASSEMBLY__ */
#endif /* CONFIG_GENERIC_BUG */
#define BUG() do { \
__BUG_FLAGS(0); \
unreachable(); \
} while (0)
#define __WARN_FLAGS(flags) __BUG_FLAGS(BUGFLAG_WARNING|(flags))
#define HAVE_ARCH_BUG
#include <asm-generic/bug.h>
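
A minimal sketch of what this enables (a throwaway test module; the
name and the do_bug knob are hypothetical): both statements below now
compile to a (c.)ebreak and are resolved through the bug table, in
modules as well as in the core kernel:

  #include <linux/bug.h>
  #include <linux/module.h>

  static bool do_bug;			/* hypothetical knob */
  module_param(do_bug, bool, 0444);

  static int __init bug_demo_init(void)
  {
  	/* Traps via ebreak; the handler finds the bug_table entry,
  	 * prints the warning, and resumes after the instruction. */
  	WARN(1, "trap-based WARN: execution continues\n");

  	if (do_bug)
  		BUG();		/* ebreak again, but this one dies */
  	return 0;
  }
  module_init(bug_demo_init);
  MODULE_LICENSE("GPL");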

@@ -47,7 +47,7 @@ static inline void flush_dcache_page(struct page *page)
#else /* CONFIG_SMP */
#define flush_icache_all() sbi_remote_fence_i(NULL)
void flush_icache_all(void);
void flush_icache_mm(struct mm_struct *mm, bool local);
#endif /* CONFIG_SMP */

@@ -14,64 +14,95 @@
#ifndef _ASM_RISCV_CSR_H
#define _ASM_RISCV_CSR_H
#include <asm/asm.h>
#include <linux/const.h>
/* Status register flags */
#define SR_SIE _AC(0x00000002, UL) /* Supervisor Interrupt Enable */
#define SR_SPIE _AC(0x00000020, UL) /* Previous Supervisor IE */
#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */
#define SR_SUM _AC(0x00040000, UL) /* Supervisor may access User Memory */
#define SR_SIE _AC(0x00000002, UL) /* Supervisor Interrupt Enable */
#define SR_SPIE _AC(0x00000020, UL) /* Previous Supervisor IE */
#define SR_SPP _AC(0x00000100, UL) /* Previously Supervisor */
#define SR_SUM _AC(0x00040000, UL) /* Supervisor User Memory Access */
#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */
#define SR_FS_OFF _AC(0x00000000, UL)
#define SR_FS_INITIAL _AC(0x00002000, UL)
#define SR_FS_CLEAN _AC(0x00004000, UL)
#define SR_FS_DIRTY _AC(0x00006000, UL)
#define SR_FS _AC(0x00006000, UL) /* Floating-point Status */
#define SR_FS_OFF _AC(0x00000000, UL)
#define SR_FS_INITIAL _AC(0x00002000, UL)
#define SR_FS_CLEAN _AC(0x00004000, UL)
#define SR_FS_DIRTY _AC(0x00006000, UL)
#define SR_XS _AC(0x00018000, UL) /* Extension Status */
#define SR_XS_OFF _AC(0x00000000, UL)
#define SR_XS_INITIAL _AC(0x00008000, UL)
#define SR_XS_CLEAN _AC(0x00010000, UL)
#define SR_XS_DIRTY _AC(0x00018000, UL)
#define SR_XS _AC(0x00018000, UL) /* Extension Status */
#define SR_XS_OFF _AC(0x00000000, UL)
#define SR_XS_INITIAL _AC(0x00008000, UL)
#define SR_XS_CLEAN _AC(0x00010000, UL)
#define SR_XS_DIRTY _AC(0x00018000, UL)
#ifndef CONFIG_64BIT
#define SR_SD _AC(0x80000000, UL) /* FS/XS dirty */
#define SR_SD _AC(0x80000000, UL) /* FS/XS dirty */
#else
#define SR_SD _AC(0x8000000000000000, UL) /* FS/XS dirty */
#define SR_SD _AC(0x8000000000000000, UL) /* FS/XS dirty */
#endif
/* SATP flags */
#if __riscv_xlen == 32
#define SATP_PPN _AC(0x003FFFFF, UL)
#define SATP_MODE_32 _AC(0x80000000, UL)
#define SATP_MODE SATP_MODE_32
#ifndef CONFIG_64BIT
#define SATP_PPN _AC(0x003FFFFF, UL)
#define SATP_MODE_32 _AC(0x80000000, UL)
#define SATP_MODE SATP_MODE_32
#else
#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL)
#define SATP_MODE_39 _AC(0x8000000000000000, UL)
#define SATP_MODE SATP_MODE_39
#define SATP_PPN _AC(0x00000FFFFFFFFFFF, UL)
#define SATP_MODE_39 _AC(0x8000000000000000, UL)
#define SATP_MODE SATP_MODE_39
#endif
/* Interrupt Enable and Interrupt Pending flags */
#define SIE_SSIE _AC(0x00000002, UL) /* Software Interrupt Enable */
#define SIE_STIE _AC(0x00000020, UL) /* Timer Interrupt Enable */
#define SIE_SEIE _AC(0x00000200, UL) /* External Interrupt Enable */
/* SCAUSE */
#define SCAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
#define EXC_INST_MISALIGNED 0
#define EXC_INST_ACCESS 1
#define EXC_BREAKPOINT 3
#define EXC_LOAD_ACCESS 5
#define EXC_STORE_ACCESS 7
#define EXC_SYSCALL 8
#define EXC_INST_PAGE_FAULT 12
#define EXC_LOAD_PAGE_FAULT 13
#define EXC_STORE_PAGE_FAULT 15
#define IRQ_U_SOFT 0
#define IRQ_S_SOFT 1
#define IRQ_M_SOFT 3
#define IRQ_U_TIMER 4
#define IRQ_S_TIMER 5
#define IRQ_M_TIMER 7
#define IRQ_U_EXT 8
#define IRQ_S_EXT 9
#define IRQ_M_EXT 11
#define EXC_INST_MISALIGNED 0
#define EXC_INST_ACCESS 1
#define EXC_BREAKPOINT 3
#define EXC_LOAD_ACCESS 5
#define EXC_STORE_ACCESS 7
#define EXC_SYSCALL 8
#define EXC_INST_PAGE_FAULT 12
#define EXC_LOAD_PAGE_FAULT 13
#define EXC_STORE_PAGE_FAULT 15
/* SIE (Interrupt Enable) and SIP (Interrupt Pending) flags */
#define SIE_SSIE (_AC(0x1, UL) << IRQ_S_SOFT)
#define SIE_STIE (_AC(0x1, UL) << IRQ_S_TIMER)
#define SIE_SEIE (_AC(0x1, UL) << IRQ_S_EXT)
#define CSR_CYCLE 0xc00
#define CSR_TIME 0xc01
#define CSR_INSTRET 0xc02
#define CSR_SSTATUS 0x100
#define CSR_SIE 0x104
#define CSR_STVEC 0x105
#define CSR_SCOUNTEREN 0x106
#define CSR_SSCRATCH 0x140
#define CSR_SEPC 0x141
#define CSR_SCAUSE 0x142
#define CSR_STVAL 0x143
#define CSR_SIP 0x144
#define CSR_SATP 0x180
#define CSR_CYCLEH 0xc80
#define CSR_TIMEH 0xc81
#define CSR_INSTRETH 0xc82
#ifndef __ASSEMBLY__
#define csr_swap(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrrw %0, " #csr ", %1" \
__asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
: "=r" (__v) : "rK" (__v) \
: "memory"); \
__v; \
@@ -80,7 +111,7 @@
#define csr_read(csr) \
({ \
register unsigned long __v; \
__asm__ __volatile__ ("csrr %0, " #csr \
__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr) \
: "=r" (__v) : \
: "memory"); \
__v; \
@@ -89,7 +120,7 @@
#define csr_write(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrw " #csr ", %0" \
__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0" \
: : "rK" (__v) \
: "memory"); \
})
@@ -97,7 +128,7 @@
#define csr_read_set(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrrs %0, " #csr ", %1" \
__asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
: "=r" (__v) : "rK" (__v) \
: "memory"); \
__v; \
@@ -106,7 +137,7 @@
#define csr_set(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrs " #csr ", %0" \
__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0" \
: : "rK" (__v) \
: "memory"); \
})
@@ -114,7 +145,7 @@
#define csr_read_clear(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrrc %0, " #csr ", %1" \
__asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
: "=r" (__v) : "rK" (__v) \
: "memory"); \
__v; \
@@ -123,7 +154,7 @@
#define csr_clear(csr, val) \
({ \
unsigned long __v = (unsigned long)(val); \
__asm__ __volatile__ ("csrc " #csr ", %0" \
__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0" \
: : "rK" (__v) \
: "memory"); \
})
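
A sketch of call sites under the new scheme (the constants are the
ones defined above); the accessors now take CSR numbers, so C and
assembly can share the same names:

  unsigned long status, cause;

  status = csr_read(CSR_SSTATUS);	/* was: csr_read(sstatus) */
  csr_set(CSR_SSTATUS, SR_SIE);		/* enable S-mode interrupts */
  cause = csr_read(CSR_SCAUSE);
  if (cause & SCAUSE_IRQ_FLAG)
  	csr_clear(CSR_SIP, SIE_SSIE);	/* ack a software IPI */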

@@ -27,13 +27,7 @@
#define ELF_CLASS ELFCLASS32
#endif
#if defined(__LITTLE_ENDIAN)
#define ELF_DATA ELFDATA2LSB
#elif defined(__BIG_ENDIAN)
#define ELF_DATA ELFDATA2MSB
#else
#error "Unknown endianness"
#endif
/*
* This is used to ensure we don't load something for the wrong architecture.

@@ -7,18 +7,6 @@
#ifndef _ASM_FUTEX_H
#define _ASM_FUTEX_H
#ifndef CONFIG_RISCV_ISA_A
/*
* Use the generic interrupt disabling versions if the A extension
* is not supported.
*/
#ifdef CONFIG_SMP
#error "Can't support generic futex calls without A extension on SMP"
#endif
#include <asm-generic/futex.h>
#else /* CONFIG_RISCV_ISA_A */
#include <linux/futex.h>
#include <linux/uaccess.h>
#include <linux/errno.h>
@@ -124,5 +112,4 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
return ret;
}
#endif /* CONFIG_RISCV_ISA_A */
#endif /* _ASM_FUTEX_H */

@@ -21,25 +21,25 @@
/* read interrupt enabled status */
static inline unsigned long arch_local_save_flags(void)
{
return csr_read(sstatus);
return csr_read(CSR_SSTATUS);
}
/* unconditionally enable interrupts */
static inline void arch_local_irq_enable(void)
{
csr_set(sstatus, SR_SIE);
csr_set(CSR_SSTATUS, SR_SIE);
}
/* unconditionally disable interrupts */
static inline void arch_local_irq_disable(void)
{
csr_clear(sstatus, SR_SIE);
csr_clear(CSR_SSTATUS, SR_SIE);
}
/* get status and disable interrupts */
static inline unsigned long arch_local_irq_save(void)
{
return csr_read_clear(sstatus, SR_SIE);
return csr_read_clear(CSR_SSTATUS, SR_SIE);
}
/* test flags */
@@ -57,7 +57,7 @@ static inline int arch_irqs_disabled(void)
/* set interrupt enabled status */
static inline void arch_local_irq_restore(unsigned long flags)
{
csr_set(sstatus, flags & SR_SIE);
csr_set(CSR_SSTATUS, flags & SR_SIE);
}
#endif /* _ASM_RISCV_IRQFLAGS_H */

@@ -20,8 +20,6 @@
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
static inline void enter_lazy_tlb(struct mm_struct *mm,
struct task_struct *task)
@@ -39,61 +37,8 @@ static inline void destroy_context(struct mm_struct *mm)
{
}
/*
* When necessary, performs a deferred icache flush for the given MM context,
* on the local CPU. RISC-V has no direct mechanism for instruction cache
* shoot downs, so instead we send an IPI that informs the remote harts they
* need to flush their local instruction caches. To avoid pathologically slow
* behavior in a common case (a bunch of single-hart processes on a many-hart
* machine, ie 'make -j') we avoid the IPIs for harts that are not currently
* executing a MM context and instead schedule a deferred local instruction
* cache flush to be performed before execution resumes on each hart. This
* actually performs that local instruction cache flush, which implicitly only
* refers to the current hart.
*/
static inline void flush_icache_deferred(struct mm_struct *mm)
{
#ifdef CONFIG_SMP
unsigned int cpu = smp_processor_id();
cpumask_t *mask = &mm->context.icache_stale_mask;
if (cpumask_test_cpu(cpu, mask)) {
cpumask_clear_cpu(cpu, mask);
/*
* Ensure the remote hart's writes are visible to this hart.
* This pairs with a barrier in flush_icache_mm.
*/
smp_mb();
local_flush_icache_all();
}
#endif
}
static inline void switch_mm(struct mm_struct *prev,
struct mm_struct *next, struct task_struct *task)
{
if (likely(prev != next)) {
/*
* Mark the current MM context as inactive, and the next as
* active. This is at least used by the icache flushing
* routines in order to determine who should
*/
unsigned int cpu = smp_processor_id();
cpumask_clear_cpu(cpu, mm_cpumask(prev));
cpumask_set_cpu(cpu, mm_cpumask(next));
/*
* Use the old spbtr name instead of using the current satp
* name to support binutils 2.29 which doesn't know about the
* privileged ISA 1.10 yet.
*/
csr_write(sptbr, virt_to_pfn(next->pgd) | SATP_MODE);
local_flush_tlb_all();
flush_icache_deferred(next);
}
}
void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *task);
static inline void activate_mm(struct mm_struct *prev,
struct mm_struct *next)

@@ -70,47 +70,38 @@ struct pt_regs {
/* Helpers for working with the instruction pointer */
#define GET_IP(regs) ((regs)->sepc)
#define SET_IP(regs, val) (GET_IP(regs) = (val))
static inline unsigned long instruction_pointer(struct pt_regs *regs)
{
return GET_IP(regs);
return regs->sepc;
}
static inline void instruction_pointer_set(struct pt_regs *regs,
unsigned long val)
{
SET_IP(regs, val);
regs->sepc = val;
}
#define profile_pc(regs) instruction_pointer(regs)
/* Helpers for working with the user stack pointer */
#define GET_USP(regs) ((regs)->sp)
#define SET_USP(regs, val) (GET_USP(regs) = (val))
static inline unsigned long user_stack_pointer(struct pt_regs *regs)
{
return GET_USP(regs);
return regs->sp;
}
static inline void user_stack_pointer_set(struct pt_regs *regs,
unsigned long val)
{
SET_USP(regs, val);
regs->sp = val;
}
/* Helpers for working with the frame pointer */
#define GET_FP(regs) ((regs)->s0)
#define SET_FP(regs, val) (GET_FP(regs) = (val))
static inline unsigned long frame_pointer(struct pt_regs *regs)
{
return GET_FP(regs);
return regs->s0;
}
static inline void frame_pointer_set(struct pt_regs *regs,
unsigned long val)
{
SET_FP(regs, val);
regs->s0 = val;
}
static inline unsigned long regs_return_value(struct pt_regs *regs)

@@ -26,22 +26,27 @@
#define SBI_REMOTE_SFENCE_VMA_ASID 7
#define SBI_SHUTDOWN 8
#define SBI_CALL(which, arg0, arg1, arg2) ({ \
#define SBI_CALL(which, arg0, arg1, arg2, arg3) ({ \
register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0); \
register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1); \
register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2); \
register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3); \
register uintptr_t a7 asm ("a7") = (uintptr_t)(which); \
asm volatile ("ecall" \
: "+r" (a0) \
: "r" (a1), "r" (a2), "r" (a7) \
: "r" (a1), "r" (a2), "r" (a3), "r" (a7) \
: "memory"); \
a0; \
})
/* Lazy implementations until SBI is finalized */
#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0)
#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0)
#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0)
#define SBI_CALL_0(which) SBI_CALL(which, 0, 0, 0, 0)
#define SBI_CALL_1(which, arg0) SBI_CALL(which, arg0, 0, 0, 0)
#define SBI_CALL_2(which, arg0, arg1) SBI_CALL(which, arg0, arg1, 0, 0)
#define SBI_CALL_3(which, arg0, arg1, arg2) \
SBI_CALL(which, arg0, arg1, arg2, 0)
#define SBI_CALL_4(which, arg0, arg1, arg2, arg3) \
SBI_CALL(which, arg0, arg1, arg2, arg3)
static inline void sbi_console_putchar(int ch)
{
@@ -86,7 +91,7 @@ static inline void sbi_remote_sfence_vma(const unsigned long *hart_mask,
unsigned long start,
unsigned long size)
{
SBI_CALL_1(SBI_REMOTE_SFENCE_VMA, hart_mask);
SBI_CALL_3(SBI_REMOTE_SFENCE_VMA, hart_mask, start, size);
}
static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
@@ -94,7 +99,7 @@ static inline void sbi_remote_sfence_vma_asid(const unsigned long *hart_mask,
unsigned long size,
unsigned long asid)
{
SBI_CALL_1(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask);
SBI_CALL_4(SBI_REMOTE_SFENCE_VMA_ASID, hart_mask, start, size, asid);
}
#endif
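
A sketch of the user-visible fix (the wrapper is hypothetical; the
in-tree TLB-flush callers look much like this): start and size now
actually travel to the SBI in a1/a2 instead of being dropped:

  static void demo_remote_tlb_flush(struct mm_struct *mm,
  				    unsigned long start, unsigned long size)
  {
  	cpumask_t hmask;

  	riscv_cpuid_to_hartid_mask(mm_cpumask(mm), &hmask);
  	sbi_remote_sfence_vma(hmask.bits, start, size);
  }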

@@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* SiFive L2 Cache Controller header file
*
*/
#ifndef _ASM_RISCV_SIFIVE_L2_CACHE_H
#define _ASM_RISCV_SIFIVE_L2_CACHE_H
extern int register_sifive_l2_error_notifier(struct notifier_block *nb);
extern int unregister_sifive_l2_error_notifier(struct notifier_block *nb);
#define SIFIVE_L2_ERR_TYPE_CE 0
#define SIFIVE_L2_ERR_TYPE_UE 1
#endif /* _ASM_RISCV_SIFIVE_L2_CACHE_H */
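
A sketch of a consumer, e.g. a future EDAC driver (the callback and
its registration are hypothetical; the notifier chain and event types
are the ones declared here). The void *msg argument carries the cause
string ("DirECCFix", "DatECCFail", ...) supplied by the driver:

  #include <linux/notifier.h>

  static int demo_l2_err_event(struct notifier_block *nb,
  			     unsigned long event, void *msg)
  {
  	pr_err("L2 %s error: %s\n",
  	       event == SIFIVE_L2_ERR_TYPE_UE ? "uncorrectable"
  					      : "correctable",
  	       (char *)msg);
  	return NOTIFY_OK;
  }

  static struct notifier_block demo_l2_nb = {
  	.notifier_call = demo_l2_err_event,
  };

  /* in the consumer's init path: */
  register_sifive_l2_error_notifier(&demo_l2_nb);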

@@ -28,7 +28,9 @@
#include <asm/processor.h>
#include <asm/csr.h>
typedef unsigned long mm_segment_t;
typedef struct {
unsigned long seg;
} mm_segment_t;
/*
* low level task data that entry.S needs immediate access to

@@ -23,6 +23,7 @@
#include <linux/compiler.h>
#include <linux/thread_info.h>
#include <asm/byteorder.h>
#include <asm/extable.h>
#include <asm/asm.h>
#define __enable_user_access() \
@@ -38,8 +39,10 @@
* For historical reasons, these macros are grossly misnamed.
*/
#define KERNEL_DS (~0UL)
#define USER_DS (TASK_SIZE)
#define MAKE_MM_SEG(s) ((mm_segment_t) { (s) })
#define KERNEL_DS MAKE_MM_SEG(~0UL)
#define USER_DS MAKE_MM_SEG(TASK_SIZE)
#define get_fs() (current_thread_info()->addr_limit)
@@ -48,9 +51,9 @@ static inline void set_fs(mm_segment_t fs)
current_thread_info()->addr_limit = fs;
}
#define segment_eq(a, b) ((a) == (b))
#define segment_eq(a, b) ((a).seg == (b).seg)
#define user_addr_max() (get_fs())
#define user_addr_max() (get_fs().seg)
/**
@@ -82,7 +85,7 @@ static inline int __access_ok(unsigned long addr, unsigned long size)
{
const mm_segment_t fs = get_fs();
return (size <= fs) && (addr <= (fs - size));
return size <= fs.seg && addr <= fs.seg - size;
}
/*
@@ -98,21 +101,8 @@ static inline int __access_ok(unsigned long addr, unsigned long size)
* on our cache or tlb entries.
*/
struct exception_table_entry {
unsigned long insn, fixup;
};
extern int fixup_exception(struct pt_regs *state);
#if defined(__LITTLE_ENDIAN)
#define __MSW 1
#define __LSW 0
#elif defined(__BIG_ENDIAN)
#define __MSW 0
#define __LSW 1
#else
#error "Unknown endianness"
#endif
#define __MSW 1
/*
* The "__xxx" versions of the user access functions do not verify the address

@@ -312,9 +312,6 @@ void asm_offsets(void)
- offsetof(struct task_struct, thread.fstate.f[0])
);
/* The assembler needs access to THREAD_SIZE as well. */
DEFINE(ASM_THREAD_SIZE, THREAD_SIZE);
/*
* We allocate a pt_regs on the stack when entering the kernel. This
* ensures the alignment is sane.

@@ -136,8 +136,7 @@ static void c_stop(struct seq_file *m, void *v)
static int c_show(struct seq_file *m, void *v)
{
unsigned long cpu_id = (unsigned long)v - 1;
struct device_node *node = of_get_cpu_node(cpuid_to_hartid_map(cpu_id),
NULL);
struct device_node *node = of_get_cpu_node(cpu_id, NULL);
const char *compat, *isa, *mmu;
seq_printf(m, "processor\t: %lu\n", cpu_id);

@@ -37,11 +37,11 @@
* the kernel thread pointer. If we came from the kernel, sscratch
* will contain 0, and we should continue on the current TP.
*/
csrrw tp, sscratch, tp
csrrw tp, CSR_SSCRATCH, tp
bnez tp, _save_context
_restore_kernel_tpsp:
csrr tp, sscratch
csrr tp, CSR_SSCRATCH
REG_S sp, TASK_TI_KERNEL_SP(tp)
_save_context:
REG_S sp, TASK_TI_USER_SP(tp)
@@ -87,11 +87,11 @@ _save_context:
li t0, SR_SUM | SR_FS
REG_L s0, TASK_TI_USER_SP(tp)
csrrc s1, sstatus, t0
csrr s2, sepc
csrr s3, sbadaddr
csrr s4, scause
csrr s5, sscratch
csrrc s1, CSR_SSTATUS, t0
csrr s2, CSR_SEPC
csrr s3, CSR_STVAL
csrr s4, CSR_SCAUSE
csrr s5, CSR_SSCRATCH
REG_S s0, PT_SP(sp)
REG_S s1, PT_SSTATUS(sp)
REG_S s2, PT_SEPC(sp)
@@ -107,8 +107,8 @@ _save_context:
.macro RESTORE_ALL
REG_L a0, PT_SSTATUS(sp)
REG_L a2, PT_SEPC(sp)
csrw sstatus, a0
csrw sepc, a2
csrw CSR_SSTATUS, a0
csrw CSR_SEPC, a2
REG_L x1, PT_RA(sp)
REG_L x3, PT_GP(sp)
@@ -155,7 +155,7 @@ ENTRY(handle_exception)
* Set sscratch register to 0, so that if a recursive exception
* occurs, the exception vector knows it came from the kernel
*/
csrw sscratch, x0
csrw CSR_SSCRATCH, x0
/* Load the global pointer */
.option push
@@ -248,7 +248,7 @@ resume_userspace:
* Save TP into sscratch, so we can find the kernel data structures
* again.
*/
csrw sscratch, tp
csrw CSR_SSCRATCH, tp
restore_all:
RESTORE_ALL

@@ -23,7 +23,8 @@
__INIT
ENTRY(_start)
/* Mask all interrupts */
csrw sie, zero
csrw CSR_SIE, zero
csrw CSR_SIP, zero
/* Load the global pointer */
.option push
@@ -68,14 +69,10 @@ clear_bss_done:
/* Restore C environment */
la tp, init_task
sw zero, TASK_TI_CPU(tp)
la sp, init_thread_union
li a0, ASM_THREAD_SIZE
add sp, sp, a0
la sp, init_thread_union + THREAD_SIZE
/* Start the kernel */
mv a0, s0
mv a1, s1
mv a0, s1
call parse_dtb
tail start_kernel
@@ -89,7 +86,7 @@ relocate:
/* Point stvec to virtual address of instruction after satp write */
la a0, 1f
add a0, a0, a1
csrw stvec, a0
csrw CSR_STVEC, a0
/* Compute satp for kernel page tables, but don't load it yet */
la a2, swapper_pg_dir
@@ -99,18 +96,20 @@ relocate:
/*
* Load trampoline page directory, which will cause us to trap to
* stvec if VA != PA, or simply fall through if VA == PA
* stvec if VA != PA, or simply fall through if VA == PA. We need a
* full fence here because setup_vm() just wrote these PTEs and we need
* to ensure the new translations are in use.
*/
la a0, trampoline_pg_dir
srl a0, a0, PAGE_SHIFT
or a0, a0, a1
sfence.vma
csrw sptbr, a0
csrw CSR_SATP, a0
.align 2
1:
/* Set trap vector to spin forever to help debug */
la a0, .Lsecondary_park
csrw stvec, a0
csrw CSR_STVEC, a0
/* Reload the global pointer */
.option push
@@ -118,8 +117,14 @@ relocate:
la gp, __global_pointer$
.option pop
/* Switch to kernel page tables */
csrw sptbr, a2
/*
* Switch to kernel page tables. A full fence is necessary in order to
* avoid using the trampoline translations, which are only correct for
* the first superpage. Fetching the fence is guaranteed to work
* because that first superpage is translated the same way.
*/
csrw CSR_SATP, a2
sfence.vma
ret
@@ -130,7 +135,7 @@ relocate:
/* Set trap vector to spin forever to help debug */
la a3, .Lsecondary_park
csrw stvec, a3
csrw CSR_STVEC, a3
slli a3, a0, LGREG
la a1, __cpu_up_stack_pointer

@@ -14,17 +14,9 @@
/*
* Possible interrupt causes:
*/
#define INTERRUPT_CAUSE_SOFTWARE 1
#define INTERRUPT_CAUSE_TIMER 5
#define INTERRUPT_CAUSE_EXTERNAL 9
/*
* The high order bit of the trap cause register is always set for
* interrupts, which allows us to differentiate them from exceptions
* quickly. The INTERRUPT_CAUSE_* macros don't contain that bit, so we
* need to mask it off.
*/
#define INTERRUPT_CAUSE_FLAG (1UL << (__riscv_xlen - 1))
#define INTERRUPT_CAUSE_SOFTWARE IRQ_S_SOFT
#define INTERRUPT_CAUSE_TIMER IRQ_S_TIMER
#define INTERRUPT_CAUSE_EXTERNAL IRQ_S_EXT
int arch_show_interrupts(struct seq_file *p, int prec)
{
@@ -37,7 +29,7 @@ asmlinkage void __irq_entry do_IRQ(struct pt_regs *regs)
struct pt_regs *old_regs = set_irq_regs(regs);
irq_enter();
switch (regs->scause & ~INTERRUPT_CAUSE_FLAG) {
switch (regs->scause & ~SCAUSE_IRQ_FLAG) {
case INTERRUPT_CAUSE_TIMER:
riscv_timer_interrupt();
break;
@@ -54,7 +46,8 @@ asmlinkage void __irq_entry do_IRQ(struct pt_regs *regs)
handle_arch_irq(regs);
break;
default:
panic("unexpected interrupt cause");
pr_alert("unexpected interrupt cause 0x%lx", regs->scause);
BUG();
}
irq_exit();

@@ -185,10 +185,10 @@ static inline u64 read_counter(int idx)
switch (idx) {
case RISCV_PMU_CYCLE:
val = csr_read(cycle);
val = csr_read(CSR_CYCLE);
break;
case RISCV_PMU_INSTRET:
val = csr_read(instret);
val = csr_read(CSR_INSTRET);
break;
default:
WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);

@@ -12,11 +12,15 @@
*/
#include <linux/reboot.h>
#include <linux/export.h>
#include <asm/sbi.h>
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL(pm_power_off);
static void default_power_off(void)
{
sbi_shutdown();
while (1);
}
void (*pm_power_off)(void) = default_power_off;
void machine_restart(char *cmd)
{
@@ -26,11 +30,10 @@ void machine_restart(char *cmd)
void machine_halt(void)
{
machine_power_off();
pm_power_off();
}
void machine_power_off(void)
{
sbi_shutdown();
while (1);
pm_power_off();
}
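
With halt and power-off both routed through pm_power_off, a
board-specific driver can take over shutdown by replacing the hook.
A hypothetical sketch (the MMIO mapping and register offset are
invented for illustration):

  static void __iomem *demo_pmic_base;	/* hypothetical mapping */

  static void demo_pmic_power_off(void)
  {
  	writel(1, demo_pmic_base + 0x10);	/* hypothetical register */
  	while (1)
  		;
  }

  static int __init demo_pmic_init(void)
  {
  	/* ... ioremap() demo_pmic_base ... */
  	pm_power_off = demo_pmic_power_off;	/* replaces default_power_off */
  	return 0;
  }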

@@ -52,9 +52,11 @@ struct screen_info screen_info = {
atomic_t hart_lottery;
unsigned long boot_cpu_hartid;
void __init parse_dtb(unsigned int hartid, void *dtb)
void __init parse_dtb(phys_addr_t dtb_phys)
{
if (early_init_dt_scan(__va(dtb)))
void *dtb = __va(dtb_phys);
if (early_init_dt_scan(dtb))
return;
pr_err("No DTB passed to the kernel\n");

@@ -234,6 +234,9 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
/* Are we from a system call? */
if (regs->scause == EXC_SYSCALL) {
/* Avoid additional syscall restarting via ret_from_exception */
regs->scause = -1UL;
/* If so, check system call restarting.. */
switch (regs->a0) {
case -ERESTART_RESTARTBLOCK:
@@ -272,6 +275,9 @@ static void do_signal(struct pt_regs *regs)
/* Did we come from a system call? */
if (regs->scause == EXC_SYSCALL) {
/* Avoid additional syscall restarting via ret_from_exception */
regs->scause = -1UL;
/* Restart the system call - no handlers present */
switch (regs->a0) {
case -ERESTARTNOHAND:

@@ -42,7 +42,7 @@ unsigned long __cpuid_to_hartid_map[NR_CPUS] = {
void __init smp_setup_processor_id(void)
{
cpuid_to_hartid_map(0) = boot_cpu_hartid;
cpuid_to_hartid_map(0) = boot_cpu_hartid;
}
/* A collection of single bit ipi messages. */
@@ -53,7 +53,7 @@ static struct {
int riscv_hartid_to_cpuid(int hartid)
{
int i = -1;
int i;
for (i = 0; i < NR_CPUS; i++)
if (cpuid_to_hartid_map(i) == hartid)
@@ -70,6 +70,12 @@ void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
for_each_cpu(cpu, in)
cpumask_set_cpu(cpuid_to_hartid_map(cpu), out);
}
bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
{
return phys_id == cpuid_to_hartid_map(cpu);
}
/* Unsupported */
int setup_profiling_timer(unsigned int multiplier)
{
@@ -89,7 +95,7 @@ void riscv_software_interrupt(void)
unsigned long *stats = ipi_data[smp_processor_id()].stats;
/* Clear pending IPI */
csr_clear(sip, SIE_SSIE);
csr_clear(CSR_SIP, SIE_SSIE);
while (true) {
unsigned long ops;
@@ -199,52 +205,3 @@ void smp_send_reschedule(int cpu)
send_ipi_message(cpumask_of(cpu), IPI_RESCHEDULE);
}
/*
* Performs an icache flush for the given MM context. RISC-V has no direct
* mechanism for instruction cache shoot downs, so instead we send an IPI that
* informs the remote harts they need to flush their local instruction caches.
* To avoid pathologically slow behavior in a common case (a bunch of
* single-hart processes on a many-hart machine, ie 'make -j') we avoid the
* IPIs for harts that are not currently executing a MM context and instead
* schedule a deferred local instruction cache flush to be performed before
* execution resumes on each hart.
*/
void flush_icache_mm(struct mm_struct *mm, bool local)
{
unsigned int cpu;
cpumask_t others, hmask, *mask;
preempt_disable();
/* Mark every hart's icache as needing a flush for this MM. */
mask = &mm->context.icache_stale_mask;
cpumask_setall(mask);
/* Flush this hart's I$ now, and mark it as flushed. */
cpu = smp_processor_id();
cpumask_clear_cpu(cpu, mask);
local_flush_icache_all();
/*
* Flush the I$ of other harts concurrently executing, and mark them as
* flushed.
*/
cpumask_andnot(&others, mm_cpumask(mm), cpumask_of(cpu));
local |= cpumask_empty(&others);
if (mm != current->active_mm || !local) {
cpumask_clear(&hmask);
riscv_cpuid_to_hartid_mask(&others, &hmask);
sbi_remote_fence_i(hmask.bits);
} else {
/*
* It's assumed that at least one strongly ordered operation is
* performed on this hart between setting a hart's cpumask bit
* and scheduling this MM context on that hart. Sending an SBI
* remote message will do this, but in the case where no
* messages are sent we still need to order this hart's writes
* with flush_icache_deferred().
*/
smp_mb();
}
preempt_enable();
}

@@ -47,6 +47,17 @@ void __init smp_prepare_boot_cpu(void)
void __init smp_prepare_cpus(unsigned int max_cpus)
{
int cpuid;
/* This covers non-smp usecase mandated by "nosmp" option */
if (max_cpus == 0)
return;
for_each_possible_cpu(cpuid) {
if (cpuid == smp_processor_id())
continue;
set_cpu_present(cpuid, true);
}
}
void __init setup_smp(void)
@@ -73,12 +84,19 @@ void __init setup_smp(void)
}
cpuid_to_hartid_map(cpuid) = hart;
set_cpu_possible(cpuid, true);
set_cpu_present(cpuid, true);
cpuid++;
}
BUG_ON(!found_boot_cpu);
if (cpuid > nr_cpu_ids)
pr_warn("Total number of cpus [%d] is greater than nr_cpus option value [%d]\n",
cpuid, nr_cpu_ids);
for (cpuid = 1; cpuid < nr_cpu_ids; cpuid++) {
if (cpuid_to_hartid_map(cpuid) != INVALID_HARTID)
set_cpu_possible(cpuid, true);
}
}
int __cpu_up(unsigned int cpu, struct task_struct *tidle)

@@ -33,9 +33,9 @@ static void notrace walk_stackframe(struct task_struct *task,
unsigned long fp, sp, pc;
if (regs) {
fp = GET_FP(regs);
sp = GET_USP(regs);
pc = GET_IP(regs);
fp = frame_pointer(regs);
sp = user_stack_pointer(regs);
pc = instruction_pointer(regs);
} else if (task == NULL || task == current) {
const register unsigned long current_sp __asm__ ("sp");
fp = (unsigned long)__builtin_frame_address(0);
@@ -64,12 +64,8 @@ static void notrace walk_stackframe(struct task_struct *task,
frame = (struct stackframe *)fp - 1;
sp = fp;
fp = frame->fp;
#ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR
pc = ftrace_graph_ret_addr(current, NULL, frame->ra,
(unsigned long *)(fp - 8));
#else
pc = frame->ra - 0x4;
#endif
}
}
@@ -82,8 +78,8 @@ static void notrace walk_stackframe(struct task_struct *task,
unsigned long *ksp;
if (regs) {
sp = GET_USP(regs);
pc = GET_IP(regs);
sp = user_stack_pointer(regs);
pc = instruction_pointer(regs);
} else if (task == NULL || task == current) {
const register unsigned long current_sp __asm__ ("sp");
sp = current_sp;

@@ -70,7 +70,7 @@ void do_trap(struct pt_regs *regs, int signo, int code,
&& printk_ratelimit()) {
pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT,
tsk->comm, task_pid_nr(tsk), signo, code, addr);
print_vma_addr(KERN_CONT " in ", GET_IP(regs));
print_vma_addr(KERN_CONT " in ", instruction_pointer(regs));
pr_cont("\n");
show_regs(regs);
}
@@ -118,6 +118,17 @@ DO_ERROR_INFO(do_trap_ecall_s,
DO_ERROR_INFO(do_trap_ecall_m,
SIGILL, ILL_ILLTRP, "environment call from M-mode");
#ifdef CONFIG_GENERIC_BUG
static inline unsigned long get_break_insn_length(unsigned long pc)
{
bug_insn_t insn;
if (probe_kernel_address((bug_insn_t *)pc, insn))
return 0;
return (((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32) ? 4UL : 2UL);
}
#endif /* CONFIG_GENERIC_BUG */
asmlinkage void do_trap_break(struct pt_regs *regs)
{
#ifdef CONFIG_GENERIC_BUG
@@ -129,8 +140,8 @@ asmlinkage void do_trap_break(struct pt_regs *regs)
case BUG_TRAP_TYPE_NONE:
break;
case BUG_TRAP_TYPE_WARN:
regs->sepc += sizeof(bug_insn_t);
return;
regs->sepc += get_break_insn_length(regs->sepc);
break;
case BUG_TRAP_TYPE_BUG:
die(regs, "Kernel BUG");
}
@@ -145,11 +156,14 @@ int is_valid_bugaddr(unsigned long pc)
{
bug_insn_t insn;
if (pc < PAGE_OFFSET)
if (pc < VMALLOC_START)
return 0;
if (probe_kernel_address((bug_insn_t *)pc, insn))
return 0;
return (insn == __BUG_INSN);
if ((insn & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
return (insn == __BUG_INSN_32);
else
return ((insn & __COMPRESSED_INSN_MASK) == __BUG_INSN_16);
}
#endif /* CONFIG_GENERIC_BUG */
@@ -159,9 +173,9 @@ void __init trap_init(void)
* Set sup0 scratch register to 0, indicating to exception vector
* that we are presently executing in the kernel
*/
csr_write(sscratch, 0);
csr_write(CSR_SSCRATCH, 0);
/* Set the exception vector address */
csr_write(stvec, &handle_exception);
csr_write(CSR_STVEC, &handle_exception);
/* Enable all interrupts */
csr_write(sie, -1);
csr_write(CSR_SIE, -1);
}
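
For reference, the length test works because RISC-V encodes the
instruction size in the low two bits (0b11 means a full 32-bit
instruction); checking the two breakpoint encodings from asm/bug.h:

  0x00100073 & 0x3 == 0x3  ->  32-bit ebreak,   sepc += 4
  0x9002     & 0x3 == 0x2  ->  16-bit c.ebreak, sepc += 2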

@@ -36,7 +36,7 @@ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
# these symbols in the kernel code rather than hand-coded addresses.
SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
$(call cc-ldoption, -Wl$(comma)--hash-style=both)
-Wl,--hash-style=both
$(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
$(call if_changed,vdsold)

@@ -9,3 +9,5 @@ obj-y += fault.o
obj-y += extable.o
obj-y += ioremap.o
obj-y += cacheflush.o
obj-y += context.o
obj-y += sifive_l2_cache.o

@@ -14,6 +14,67 @@
#include <asm/pgtable.h>
#include <asm/cacheflush.h>
#ifdef CONFIG_SMP
#include <asm/sbi.h>
void flush_icache_all(void)
{
sbi_remote_fence_i(NULL);
}
/*
* Performs an icache flush for the given MM context. RISC-V has no direct
* mechanism for instruction cache shoot downs, so instead we send an IPI that
* informs the remote harts they need to flush their local instruction caches.
* To avoid pathologically slow behavior in a common case (a bunch of
* single-hart processes on a many-hart machine, ie 'make -j') we avoid the
* IPIs for harts that are not currently executing a MM context and instead
* schedule a deferred local instruction cache flush to be performed before
* execution resumes on each hart.
*/
void flush_icache_mm(struct mm_struct *mm, bool local)
{
unsigned int cpu;
cpumask_t others, hmask, *mask;
preempt_disable();
/* Mark every hart's icache as needing a flush for this MM. */
mask = &mm->context.icache_stale_mask;
cpumask_setall(mask);
/* Flush this hart's I$ now, and mark it as flushed. */
cpu = smp_processor_id();
cpumask_clear_cpu(cpu, mask);
local_flush_icache_all();
/*
* Flush the I$ of other harts concurrently executing, and mark them as
* flushed.
*/
cpumask_andnot(&others, mm_cpumask(mm), cpumask_of(cpu));
local |= cpumask_empty(&others);
if (mm != current->active_mm || !local) {
cpumask_clear(&hmask);
riscv_cpuid_to_hartid_mask(&others, &hmask);
sbi_remote_fence_i(hmask.bits);
} else {
/*
* It's assumed that at least one strongly ordered operation is
* performed on this hart between setting a hart's cpumask bit
* and scheduling this MM context on that hart. Sending an SBI
* remote message will do this, but in the case where no
* messages are sent we still need to order this hart's writes
* with flush_icache_deferred().
*/
smp_mb();
}
preempt_enable();
}
#endif /* CONFIG_SMP */
void flush_icache_pte(pte_t pte)
{
struct page *page = pte_page(pte);

arch/riscv/mm/context.c (new file, 69 lines)
@@ -0,0 +1,69 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2012 Regents of the University of California
* Copyright (C) 2017 SiFive
*/
#include <linux/mm.h>
#include <asm/tlbflush.h>
#include <asm/cacheflush.h>
/*
* When necessary, performs a deferred icache flush for the given MM context,
* on the local CPU. RISC-V has no direct mechanism for instruction cache
* shoot downs, so instead we send an IPI that informs the remote harts they
* need to flush their local instruction caches. To avoid pathologically slow
* behavior in a common case (a bunch of single-hart processes on a many-hart
* machine, ie 'make -j') we avoid the IPIs for harts that are not currently
* executing a MM context and instead schedule a deferred local instruction
* cache flush to be performed before execution resumes on each hart. This
* actually performs that local instruction cache flush, which implicitly only
* refers to the current hart.
*/
static inline void flush_icache_deferred(struct mm_struct *mm)
{
#ifdef CONFIG_SMP
unsigned int cpu = smp_processor_id();
cpumask_t *mask = &mm->context.icache_stale_mask;
if (cpumask_test_cpu(cpu, mask)) {
cpumask_clear_cpu(cpu, mask);
/*
* Ensure the remote hart's writes are visible to this hart.
* This pairs with a barrier in flush_icache_mm.
*/
smp_mb();
local_flush_icache_all();
}
#endif
}
void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *task)
{
unsigned int cpu;
if (unlikely(prev == next))
return;
/*
* Mark the current MM context as inactive, and the next as
* active. This is at least used by the icache flushing
* routines in order to determine who should be flushed.
*/
cpu = smp_processor_id();
cpumask_clear_cpu(cpu, mm_cpumask(prev));
cpumask_set_cpu(cpu, mm_cpumask(next));
/*
* Use the old spbtr name instead of using the current satp
* name to support binutils 2.29 which doesn't know about the
* privileged ISA 1.10 yet.
*/
csr_write(sptbr, virt_to_pfn(next->pgd) | SATP_MODE);
local_flush_tlb_all();
flush_icache_deferred(next);
}

@@ -229,8 +229,9 @@ vmalloc_fault:
pte_t *pte_k;
int index;
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs))
goto bad_area;
return do_trap(regs, SIGSEGV, code, addr, tsk);
/*
* Synchronize this task's top level page-table
@@ -239,13 +240,9 @@ vmalloc_fault:
* Do _not_ use "tsk->active_mm->pgd" here.
* We might be inside an interrupt in the middle
* of a task switch.
*
* Note: Use the old spbtr name instead of using the current
* satp name to support binutils 2.29 which doesn't know about
* the privileged ISA 1.10 yet.
*/
index = pgd_index(addr);
pgd = (pgd_t *)pfn_to_virt(csr_read(sptbr)) + index;
pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;
pgd_k = init_mm.pgd + index;
if (!pgd_present(*pgd_k))

@@ -0,0 +1,175 @@
// SPDX-License-Identifier: GPL-2.0
/*
* SiFive L2 cache controller Driver
*
* Copyright (C) 2018-2019 SiFive, Inc.
*
*/
#include <linux/debugfs.h>
#include <linux/interrupt.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <asm/sifive_l2_cache.h>
#define SIFIVE_L2_DIRECCFIX_LOW 0x100
#define SIFIVE_L2_DIRECCFIX_HIGH 0x104
#define SIFIVE_L2_DIRECCFIX_COUNT 0x108
#define SIFIVE_L2_DATECCFIX_LOW 0x140
#define SIFIVE_L2_DATECCFIX_HIGH 0x144
#define SIFIVE_L2_DATECCFIX_COUNT 0x148
#define SIFIVE_L2_DATECCFAIL_LOW 0x160
#define SIFIVE_L2_DATECCFAIL_HIGH 0x164
#define SIFIVE_L2_DATECCFAIL_COUNT 0x168
#define SIFIVE_L2_CONFIG 0x00
#define SIFIVE_L2_WAYENABLE 0x08
#define SIFIVE_L2_ECCINJECTERR 0x40
#define SIFIVE_L2_MAX_ECCINTR 3
static void __iomem *l2_base;
static int g_irq[SIFIVE_L2_MAX_ECCINTR];
enum {
DIR_CORR = 0,
DATA_CORR,
DATA_UNCORR,
};
#ifdef CONFIG_DEBUG_FS
static struct dentry *sifive_test;
static ssize_t l2_write(struct file *file, const char __user *data,
size_t count, loff_t *ppos)
{
unsigned int val;
if (kstrtouint_from_user(data, count, 0, &val))
return -EINVAL;
if ((val >= 0 && val < 0xFF) || (val >= 0x10000 && val < 0x100FF))
writel(val, l2_base + SIFIVE_L2_ECCINJECTERR);
else
return -EINVAL;
return count;
}
static const struct file_operations l2_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.write = l2_write
};
static void setup_sifive_debug(void)
{
sifive_test = debugfs_create_dir("sifive_l2_cache", NULL);
debugfs_create_file("sifive_debug_inject_error", 0200,
sifive_test, NULL, &l2_fops);
}
#endif
static void l2_config_read(void)
{
u32 regval, val;
regval = readl(l2_base + SIFIVE_L2_CONFIG);
val = regval & 0xFF;
pr_info("L2CACHE: No. of Banks in the cache: %d\n", val);
val = (regval & 0xFF00) >> 8;
pr_info("L2CACHE: No. of ways per bank: %d\n", val);
val = (regval & 0xFF0000) >> 16;
pr_info("L2CACHE: Sets per bank: %llu\n", (uint64_t)1 << val);
val = (regval & 0xFF000000) >> 24;
pr_info("L2CACHE: Bytes per cache block: %llu\n", (uint64_t)1 << val);
regval = readl(l2_base + SIFIVE_L2_WAYENABLE);
pr_info("L2CACHE: Index of the largest way enabled: %d\n", regval);
}
static const struct of_device_id sifive_l2_ids[] = {
{ .compatible = "sifive,fu540-c000-ccache" },
{ /* end of table */ },
};
static ATOMIC_NOTIFIER_HEAD(l2_err_chain);
int register_sifive_l2_error_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_register(&l2_err_chain, nb);
}
EXPORT_SYMBOL_GPL(register_sifive_l2_error_notifier);
int unregister_sifive_l2_error_notifier(struct notifier_block *nb)
{
return atomic_notifier_chain_unregister(&l2_err_chain, nb);
}
EXPORT_SYMBOL_GPL(unregister_sifive_l2_error_notifier);
static irqreturn_t l2_int_handler(int irq, void *device)
{
unsigned int regval, add_h, add_l;
if (irq == g_irq[DIR_CORR]) {
add_h = readl(l2_base + SIFIVE_L2_DIRECCFIX_HIGH);
add_l = readl(l2_base + SIFIVE_L2_DIRECCFIX_LOW);
pr_err("L2CACHE: DirError @ 0x%08X.%08X\n", add_h, add_l);
regval = readl(l2_base + SIFIVE_L2_DIRECCFIX_COUNT);
atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_CE,
"DirECCFix");
}
if (irq == g_irq[DATA_CORR]) {
add_h = readl(l2_base + SIFIVE_L2_DATECCFIX_HIGH);
add_l = readl(l2_base + SIFIVE_L2_DATECCFIX_LOW);
pr_err("L2CACHE: DataError @ 0x%08X.%08X\n", add_h, add_l);
regval = readl(l2_base + SIFIVE_L2_DATECCFIX_COUNT);
atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_CE,
"DatECCFix");
}
if (irq == g_irq[DATA_UNCORR]) {
add_h = readl(l2_base + SIFIVE_L2_DATECCFAIL_HIGH);
add_l = readl(l2_base + SIFIVE_L2_DATECCFAIL_LOW);
pr_err("L2CACHE: DataFail @ 0x%08X.%08X\n", add_h, add_l);
regval = readl(l2_base + SIFIVE_L2_DATECCFAIL_COUNT);
atomic_notifier_call_chain(&l2_err_chain, SIFIVE_L2_ERR_TYPE_UE,
"DatECCFail");
}
return IRQ_HANDLED;
}
int __init sifive_l2_init(void)
{
struct device_node *np;
struct resource res;
int i, rc;
np = of_find_matching_node(NULL, sifive_l2_ids);
if (!np)
return -ENODEV;
if (of_address_to_resource(np, 0, &res))
return -ENODEV;
l2_base = ioremap(res.start, resource_size(&res));
if (!l2_base)
return -ENOMEM;
for (i = 0; i < SIFIVE_L2_MAX_ECCINTR; i++) {
g_irq[i] = irq_of_parse_and_map(np, i);
rc = request_irq(g_irq[i], l2_int_handler, 0, "l2_ecc", NULL);
if (rc) {
pr_err("L2CACHE: Could not request IRQ %d\n", g_irq[i]);
return rc;
}
}
l2_config_read();
#ifdef CONFIG_DEBUG_FS
setup_sifive_debug();
#endif
return 0;
}
device_initcall(sifive_l2_init);

@@ -53,7 +53,6 @@ device_initcall(hvc_sbi_init);
static int __init hvc_sbi_console_init(void)
{
hvc_instantiate(0, 0, &hvc_sbi_ops);
add_preferred_console("hvc", 0, NULL);
return 0;
}