RISC-V Updates for the 4.18 Merge Window

This tag contains some small RISC-V updates I'd like to target for 4.18.
 They are all fairly small this time.  Here's a short summary, there's
 more info in the commits/merges.
 
 * A fix to __clear_user to respect the passed arguments.
 * Enough support for the perf subsystem to work with RISC-V's ISA
   defined performance counters.
 * Support for sparse and cleanups suggested by it.
 * Support for R_RISCV_32 (a relocation, not the 32-bit ISA).
 * Some MAINTAINERS cleanups.
 * The addition of CONFIG_HVC_RISCV_SBI to our defconfig, as it's always
   present.
 
 I've given these a simple build+boot test.
 -----BEGIN PGP SIGNATURE-----
 
 iQJHBAABCAAxFiEEAM520YNJYN/OiG3470yhUCzLq0EFAlsezeQTHHBhbG1lckBk
 YWJiZWx0LmNvbQAKCRDvTKFQLMurQSjWD/999J5HKkHQfHJi4/RQh0SEFNnRv32U
 1O7zwqA5PkeoxWqq1y+dKcPGwZTZncwWp8yn8xipVYTYmYgGNZj9CYdEYkO119y5
 OVcyUZdHlSdXgKkpVDJ0+MrZ60LY6tS66b6oJqQKmB/N4rYvu5L6ctRtyHRQe4nb
 rXNVbnaouiwrFJs9iZaCyaaGAGXKg81C5xCvvr8P0CYzVD4Jx+AmD6c7GohGWJS3
 PLttEsmPaiaV9pzWK18yeFLaIgAqNEo2/s7/QsR1sHo4dUEJyFu9nMHvmjlJucCu
 imrkRwlhsCKxa4ob9D6UPh0qBXDmbSQA1U6M9RKY1jdt7Gul5eMuZIz2r/45752D
 z3YCITgTih2dzWO0zw4GCicCcJoD39IVTrsRCIqxF7jmAfBV8s/U+irMjEMkYhz6
 wKVM9L3/6Z4bPAEztqKMjSw1/nNSavyn6wWACYIj5SDtFqNkvxwSahg1nGTuuoNx
 JqRa0r+lxsbunSwpEVtRROLYE3ZYa/KPrKhKEr+7vkiPlNNv82TZ5T/wkObhVfKp
 S6tZ0/Wb0tchm//LurfLAOV4aZxDlOLfukZ3eSdgBqSfILA3T98zS9FpuCmjiUfG
 6S5GdNvjc3jLLZvPwtofIZmKUSpRSj545J1RRkEcgoEFjXdzxOWFLETsIOIt4nFf
 7Nwjw/Th1hzuyQ==
 =D+sb
 -----END PGP SIGNATURE-----

Merge tag 'riscv-for-linus-4.18-merge_window' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux

Pull RISC-V updates from Palmer Dabbelt:
 "This contains some small RISC-V updates I'd like to target for 4.18.

  They are all fairly small this time. Here's a short summary, there's
  more info in the commits/merges:

   - a fix to __clear_user to respect the passed arguments.

   - enough support for the perf subsystem to work with RISC-V's ISA
     defined performance counters.

   - support for sparse and cleanups suggested by it.

   - support for R_RISCV_32 (a relocation, not the 32-bit ISA).

   - some MAINTAINERS cleanups.

   - the addition of CONFIG_HVC_RISCV_SBI to our defconfig, as it's
     always present.

  I've given these a simple build+boot test"

* tag 'riscv-for-linus-4.18-merge_window' of git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux:
  RISC-V: Add CONFIG_HVC_RISCV_SBI=y to defconfig
  RISC-V: Handle R_RISCV_32 in modules
  riscv/ftrace: Export _mcount when DYNAMIC_FTRACE isn't set
  riscv: add riscv-specific predefines to CHECKFLAGS
  riscv: split the declaration of __copy_user
  riscv: no __user for probe_kernel_address()
  riscv: use NULL instead of a plain 0
  perf: riscv: Add Document for Future Porting Guide
  perf: riscv: preliminary RISC-V support
  MAINTAINERS: Update Albert's email, he's back at Berkeley
  MAINTAINERS: Add myself as a maintainer for SiFive's drivers
  riscv: Fix the bug in memory access fixup code
Linus Torvalds 2018-06-16 06:42:43 +09:00
Parents: 8949170cf4 24a130ccfe
Commit: 6a4d4b3253
17 changed files, 884 additions and 15 deletions

Documentation/riscv/pmu.txt (new file, 249 lines)

@@ -0,0 +1,249 @@
Supporting PMUs on RISC-V platforms
===================================

Alan Kao <alankao@andestech.com>, Mar 2018


Introduction
------------

As of this writing, the perf_event-related features mentioned in the RISC-V
Privileged ISA Specification, version 1.10, are as follows (please check the
manual for more details):

* [m|s]counteren
* mcycle[h], cycle[h]
* minstret[h], instret[h]
* mhpmeventx, mhpmcounterx[h]
With only this feature set, porting perf would require a lot of work, due to
the lack of the following general architectural performance monitoring
features:

* Enabling/Disabling counters
  Counters are just free-running all the time in our case.

* Interrupt caused by counter overflow
  There is no such feature in the spec.

* Interrupt indicator
  It is not possible to have an interrupt port for every counter, so an
  interrupt indicator is required for software to tell which counter has
  just overflowed.

* Writing to counters
  There will be an SBI call to support this, since the kernel cannot modify
  the counters [1]. Alternatively, some vendors consider implementing a
  hardware extension for M-S-U model machines to write counters directly.
This document aims to provide developers with a quick guide on supporting
their PMUs in the kernel. The following sections briefly explain perf's
mechanism and the work that remains.

You may check previous discussions here [1][2]. Also, it might be helpful
to check the appendix for related kernel structures.

1. Initialization
-----------------
*riscv_pmu* is a global pointer of type *struct riscv_pmu*, which contains
various methods following perf's internal convention, plus PMU-specific
parameters. One should declare such an instance to represent the PMU. By
default, *riscv_pmu* points to a constant structure *riscv_base_pmu*, which
has very basic support for a baseline QEMU model.

Developers can then either assign their own instance's pointer to
*riscv_pmu*, so that the minimal and already-implemented logic can be
leveraged, or provide their own *riscv_init_platform_pmu* implementation.

In other words, the existing sources of *riscv_base_pmu* merely provide a
reference implementation. Developers can flexibly decide how many parts they
want to leverage; in the most extreme case, they can customize every function
according to their needs.
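
For example, a platform port might look roughly like the sketch below. The
*vendor_pmu* and *vendor_hw_event_map* names are hypothetical and purely for
illustration; *min_pmu*, *riscv_map_hw_event*, *riscv_map_cache_event* and
*riscv_cache_event_map* are the base-implementation helpers added by this
commit in arch/riscv/kernel/perf_event.c:

/* Hypothetical sketch: a platform-specific riscv_pmu instance that reuses
 * the base callbacks but supplies its own hardware event map. */
static const int vendor_hw_event_map[] = {
        [PERF_COUNT_HW_CPU_CYCLES]      = RISCV_PMU_CYCLE,
        [PERF_COUNT_HW_INSTRUCTIONS]    = RISCV_PMU_INSTRET,
        [PERF_COUNT_HW_CACHE_MISSES]    = RISCV_PMU_MHPMCOUNTER3,
        /* any event left out here must be set to RISCV_OP_UNSUPP in real
         * code, as riscv_hw_event_map does below */
};

static const struct riscv_pmu vendor_pmu = {
        .pmu             = &min_pmu,    /* reuse the base struct pmu ops */
        .max_events      = ARRAY_SIZE(vendor_hw_event_map),
        .hw_events       = vendor_hw_event_map,
        .map_hw_event    = riscv_map_hw_event,
        .map_cache_event = riscv_map_cache_event,
        .cache_events    = &riscv_cache_event_map,
        .num_counters    = RISCV_BASE_COUNTERS + 1,
        .counter_width   = 63,
        .irq             = -1,          /* no overflow interrupt wired up */
};

Such an instance can then be matched to the platform through the
*riscv_pmu_of_ids* device-tree table, just as *riscv_base_pmu* is at the end
of perf_event.c.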
2. Event Initialization
-----------------------
When a user launches a perf command to monitor some events, it is first
interpreted by the userspace perf tool as multiple *perf_event_open* system
calls, each of which then invokes the *event_init* member function that was
assigned in the previous step. In *riscv_base_pmu*'s case, that is
*riscv_event_init*.

The main purpose of this function is to translate the event provided by the
user into a bitmap, so that HW-related control registers or counters can be
directly manipulated. The translation is based on the mappings and methods
provided in *riscv_pmu*.
Note that some features can also be set up at this stage:

(1) interrupt setting, which is covered in the next section;
(2) privilege level setting (user space only, kernel space only, both);
(3) destructor setting (normally it is sufficient to apply
    *riscv_destroy_event*);
(4) tweaks for non-sampling events, which will be utilized by functions such
    as *perf_adjust_period*, usually something like the following:

if (!is_sampling_event(event)) {
        hwc->sample_period = x86_pmu.max_period;
        hwc->last_period = hwc->sample_period;
        local64_set(&hwc->period_left, hwc->sample_period);
}

In the case of *riscv_base_pmu*, only (3) is provided for now.
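
To make the flow concrete, here is roughly what the perf tool sets up for a
hardware "cycles" event before calling *perf_event_open* (an illustrative
fragment; *riscv_event_init* then maps .config through *riscv_hw_event_map*
to RISCV_PMU_CYCLE):

struct perf_event_attr attr = {
        .type   = PERF_TYPE_HARDWARE,
        .size   = sizeof(attr),
        .config = PERF_COUNT_HW_CPU_CYCLES,
};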
3. Interrupt
------------
3.1. Interrupt Initialization

This often occurs at the beginning of the *event_init* method. In common
practice, this should be a code segment like:
int x86_reserve_hardware(void)
{
        int err = 0;

        if (!atomic_inc_not_zero(&pmc_refcount)) {
                mutex_lock(&pmc_reserve_mutex);
                if (atomic_read(&pmc_refcount) == 0) {
                        if (!reserve_pmc_hardware())
                                err = -EBUSY;
                        else
                                reserve_ds_buffers();
                }
                if (!err)
                        atomic_inc(&pmc_refcount);
                mutex_unlock(&pmc_reserve_mutex);
        }

        return err;
}
The magic is in *reserve_pmc_hardware*, which usually performs atomic
operations to make the implemented IRQ accessible through some global
function pointer. *release_pmc_hardware* serves the opposite purpose, and is
used in the event destructors mentioned in the previous section.

(Note: across the implementations in all architectures, the
*reserve*/*release* pair always deals with IRQ settings, so the name
*pmc_hardware* is somewhat misleading. It does NOT deal with the binding
between an event and a physical counter, which will be introduced in the
next section.)
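
The skeleton added by this commit follows the same pattern in miniature.
Condensed from arch/riscv/kernel/perf_event.c below, the reservation only
requests the optional PMU interrupt, and *riscv_event_destroy* drops it
again via *release_pmc_hardware* when the last active event goes away:

static int reserve_pmc_hardware(void)
{
        int err = 0;

        mutex_lock(&pmc_reserve_mutex);
        if (riscv_pmu->irq >= 0 && riscv_pmu->handle_irq)
                err = request_irq(riscv_pmu->irq, riscv_pmu->handle_irq,
                                  IRQF_PERCPU, "riscv-base-perf", NULL);
        mutex_unlock(&pmc_reserve_mutex);

        return err;
}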
3.2. IRQ Structure

Basically, an IRQ handler runs the following pseudo code:

for each hardware counter that triggered this overflow

    get the event of this counter

    // the following two steps are defined as *read()*;
    // check the section Reading/Writing Counters for details
    count the delta value since the previous interrupt
    update event->count (the number of event occurrences) by adding the
    delta, and event->hw.period_left by subtracting the delta

    if the event overflows
        sample data
        set the counter appropriately for the next overflow
        if the event overflows again too frequently
            throttle this event
        fi
    fi

end for

However, as of this writing, none of the RISC-V implementations have designed
an interrupt for perf, so the details are to be completed in the future.
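
If an implementation did provide such an interrupt, a handler following the
pseudo code above might be sketched as follows. This is hypothetical:
*read_overflow_indicator* stands in for whatever vendor-specific CSR or MMIO
register reports overflowed counters, and the sampling details are left out:

/* Hypothetical skeleton only; no current RISC-V implementation defines
 * a perf overflow interrupt. */
static irqreturn_t vendor_pmu_handle_irq(int irq_num, void *dev)
{
        struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
        unsigned long overflow = read_overflow_indicator();
        int idx;

        for_each_set_bit(idx, &overflow, riscv_pmu->num_counters) {
                struct perf_event *event = cpuc->events[idx];

                if (!event)
                        continue;
                /* *read()*: update event->count and period_left */
                riscv_pmu->pmu->read(event);
                /* sample data, then set the counter up for the next
                 * overflow; the perf core throttles events that
                 * overflow too frequently */
        }

        return IRQ_HANDLED;
}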
4. Reading/Writing Counters
---------------------------
Reading and writing seem symmetric, but perf treats them quite differently.
For reading, there is a *read* interface in *struct pmu*, but it serves more
than just reading. According to the context, the *read* function not only
reads the content of the counter (event->count), but also updates the period
remaining until the next interrupt (event->hw.period_left).

The perf core does not need to write counters directly, though. Writing is
hidden behind two abstractions: 1) *pmu->start* literally starts counting, so
one has to set the counter to a good value for the next interrupt; 2) inside
the IRQ handler, the counter should be set to the same reasonable value.

Reading is not a problem on RISC-V, but writing would need some effort, since
counters are not allowed to be written from S-mode.
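
Concretely, the base implementation keeps a software copy of each counter in
event->hw.prev_count and folds new readings in as deltas, masked to the
counter width so that wrap-around needs no hardware write. A self-contained
sketch of that arithmetic (mirroring *riscv_pmu_read* in
arch/riscv/kernel/perf_event.c below):

/* Wrap-around-safe delta, as used by pmu->read.  With counter_width = 63,
 * prev = 0x7ffffffffffffff0 and now = 0x10 yield a delta of 0x20. */
static inline u64 counter_delta(u64 prev, u64 now, int counter_width)
{
        return (now - prev) & ((1ULL << counter_width) - 1);
}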
5. add()/del()/start()/stop()
-----------------------------
Basic idea: *add*/*del* adds/deletes events to/from a PMU, and
*start*/*stop* starts/stops the counter of some event in the PMU. All of
them take the same arguments: *struct perf_event *event* and *int flags*.

If you consider perf as a state machine, these functions serve as the state
transitions between those states. Three states (event->hw.state) are defined:

* PERF_HES_STOPPED:  the counter is stopped
* PERF_HES_UPTODATE: the event->count is up-to-date
* PERF_HES_ARCH:     arch-dependent usage ... we don't need this for now

A normal flow of these state transitions is as follows:

* A user launches a perf event, resulting in a call to *event_init*.

* When the task is context-switched in, *add* is called by the perf core,
  with the flag PERF_EF_START, which means that the event should be started
  after it is added. At this stage, a general event is bound to a physical
  counter, if any. The state changes to PERF_HES_STOPPED and
  PERF_HES_UPTODATE, because the event is now stopped and the (software)
  event count does not need updating.

** *start* is then called, and the counter is enabled. With the flag
   PERF_EF_RELOAD, it writes an appropriate value to the counter (check the
   previous section for details). Nothing is written if the flag does not
   contain PERF_EF_RELOAD. The state is now reset to none, because the event
   is neither stopped nor up-to-date (the counting has already started).

* When the task is context-switched out, *del* is called. It then checks out
  all the events in the PMU and calls *stop* to update their counts.

** *stop* is called by *del* and by the perf core with the flag
   PERF_EF_UPDATE, and it often shares the same subroutine as *read*, with
   the same logic. The state changes to PERF_HES_STOPPED and
   PERF_HES_UPTODATE again.

Life cycle of these two pairs: *add* and *del* are called repeatedly as
tasks switch in and out; *start* and *stop* are also called when the perf
core needs a quick stop-and-start, for instance, when the interrupt period
is being adjusted.

The current implementation is sufficient for now and can easily be extended
with new features in the future.
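
From user space, this whole life cycle can be exercised with a plain
counting session; each step below lands in one of the callbacks described
above (a minimal sketch following the perf_event_open(2) man page):

#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        struct perf_event_attr attr;
        uint64_t count;
        int fd;

        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;

        /* -> event_init() */
        fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0)
                return 1;

        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);    /* -> add()/start() */
        /* ... workload under measurement ... */
        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);   /* -> stop()/del() */

        read(fd, &count, sizeof(count));        /* -> read() */
        printf("instructions: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
}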
A. Related Structures
---------------------
* struct pmu: include/linux/perf_event.h
* struct riscv_pmu: arch/riscv/include/asm/perf_event.h

Both structures are designed to be read-only.

*struct pmu* defines some function pointer interfaces, and most of them take
*struct perf_event* as a main argument, dealing with perf events according to
perf's internal state machine (check kernel/events/core.c for details).

*struct riscv_pmu* defines PMU-specific parameters. The naming follows the
convention of all other architectures.

* struct perf_event: include/linux/perf_event.h
* struct hw_perf_event

The generic structure that represents perf events, and its hardware-related
details.

* struct cpu_hw_events: arch/riscv/include/asm/perf_event.h

The structure that holds the status of events; it has two fixed members:
the number of events and the array of the events.
References
----------
[1] https://github.com/riscv/riscv-linux/pull/124
[2] https://groups.google.com/a/groups.riscv.org/forum/#!topic/sw-dev/f19TmCNP6yA

MAINTAINERS

@@ -12179,7 +12179,7 @@ F: drivers/mtd/nand/raw/r852.h
 
 RISC-V ARCHITECTURE
 M: Palmer Dabbelt <palmer@sifive.com>
-M: Albert Ou <albert@sifive.com>
+M: Albert Ou <aou@eecs.berkeley.edu>
 L: linux-riscv@lists.infradead.org
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git
 S: Supported
@@ -12939,6 +12939,14 @@ F: drivers/media/usb/siano/
 F: drivers/media/usb/siano/
 F: drivers/media/mmc/siano/
 
+SIFIVE DRIVERS
+M: Palmer Dabbelt <palmer@sifive.com>
+L: linux-riscv@lists.infradead.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/palmer/riscv-linux.git
+S: Supported
+K: sifive
+N: sifive
+
 SILEAD TOUCHSCREEN DRIVER
 M: Hans de Goede <hdegoede@redhat.com>
 L: linux-input@vger.kernel.org

arch/riscv/Kconfig

@@ -32,6 +32,7 @@ config RISCV
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_DMA_CONTIGUOUS
 	select HAVE_GENERIC_DMA_COHERENT
+	select HAVE_PERF_EVENTS
 	select IRQ_DOMAIN
 	select NO_BOOTMEM
 	select RISCV_ISA_A if SMP
@@ -193,6 +194,19 @@ config RISCV_ISA_C
 config RISCV_ISA_A
 	def_bool y
 
+menu "supported PMU type"
+	depends on PERF_EVENTS
+
+config RISCV_BASE_PMU
+	bool "Base Performance Monitoring Unit"
+	def_bool y
+	help
+	  A base PMU that serves as a reference implementation and has limited
+	  perf features. It can run on any RISC-V machine, so it serves as the
+	  fallback, but this option can also be disabled to reduce kernel size.
+
+endmenu
+
 endmenu
 
 menu "Kernel type"

arch/riscv/Makefile

@@ -71,6 +71,9 @@ KBUILD_CFLAGS_MODULE += $(call cc-option,-mno-relax)
 # architectures.  It's faster to have GCC emit only aligned accesses.
 KBUILD_CFLAGS += $(call cc-option,-mstrict-align)
 
+# arch specific predefines for sparse
+CHECKFLAGS += -D__riscv -D__riscv_xlen=$(BITS)
+
 head-y := arch/riscv/kernel/head.o
 
 core-y += arch/riscv/kernel/ arch/riscv/mm/

arch/riscv/configs/defconfig

@@ -44,6 +44,7 @@ CONFIG_INPUT_MOUSEDEV=y
 CONFIG_SERIAL_8250=y
 CONFIG_SERIAL_8250_CONSOLE=y
 CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_HVC_RISCV_SBI=y
 # CONFIG_PTP_1588_CLOCK is not set
 CONFIG_DRM=y
 CONFIG_DRM_RADEON=y

arch/riscv/include/asm/Kbuild

@@ -25,6 +25,7 @@ generic-y += kdebug.h
 generic-y += kmap_types.h
 generic-y += kvm_para.h
 generic-y += local.h
+generic-y += local64.h
 generic-y += mm-arch-hooks.h
 generic-y += mman.h
 generic-y += module.h

arch/riscv/include/asm/cacheflush.h

@@ -47,7 +47,7 @@ static inline void flush_dcache_page(struct page *page)
 #else /* CONFIG_SMP */
-#define flush_icache_all() sbi_remote_fence_i(0)
+#define flush_icache_all() sbi_remote_fence_i(NULL)
 void flush_icache_mm(struct mm_struct *mm, bool local);
 #endif /* CONFIG_SMP */

arch/riscv/include/asm/perf_event.h (new file)

@@ -0,0 +1,84 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2018 SiFive
* Copyright (C) 2018 Andes Technology Corporation
*
*/
#ifndef _ASM_RISCV_PERF_EVENT_H
#define _ASM_RISCV_PERF_EVENT_H
#include <linux/perf_event.h>
#include <linux/ptrace.h>
#define RISCV_BASE_COUNTERS 2
/*
* The RISCV_MAX_COUNTERS parameter should be specified.
*/
#ifdef CONFIG_RISCV_BASE_PMU
#define RISCV_MAX_COUNTERS 2
#endif
#ifndef RISCV_MAX_COUNTERS
#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
#endif
/*
 * These are the indexes of bits in the counteren register *minus* 1,
 * except for cycle.  It would be coherent if they could be mapped directly
 * to the counteren bit definitions, but there is a *time* register at
 * counteren[1].  Per-cpu structures are a scarce resource here.
 *
 * According to the spec, an implementation can support counters up to
 * mhpmcounter31, but many high-end processors have at most 6 general
 * PMCs, so we only define up to MHPMCOUNTER8 here.
 */
#define RISCV_PMU_CYCLE 0
#define RISCV_PMU_INSTRET 1
#define RISCV_PMU_MHPMCOUNTER3 2
#define RISCV_PMU_MHPMCOUNTER4 3
#define RISCV_PMU_MHPMCOUNTER5 4
#define RISCV_PMU_MHPMCOUNTER6 5
#define RISCV_PMU_MHPMCOUNTER7 6
#define RISCV_PMU_MHPMCOUNTER8 7
#define RISCV_OP_UNSUPP (-EOPNOTSUPP)
struct cpu_hw_events {
/* # of currently enabled events */
int n_events;
/* currently enabled events */
struct perf_event *events[RISCV_MAX_COUNTERS];
/* vendor-defined PMU data */
void *platform;
};
struct riscv_pmu {
struct pmu *pmu;
/* generic hw/cache events table */
const int *hw_events;
const int (*cache_events)[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX];
/* method used to map hw/cache events */
int (*map_hw_event)(u64 config);
int (*map_cache_event)(u64 config);
/* max generic hw events in map */
int max_events;
/* number total counters, 2(base) + x(general) */
int num_counters;
/* the width of the counter */
int counter_width;
/* vendor-defined PMU features */
void *platform;
irqreturn_t (*handle_irq)(int irq_num, void *dev);
int irq;
};
#endif /* _ASM_RISCV_PERF_EVENT_H */

arch/riscv/include/asm/tlbflush.h

@@ -49,7 +49,7 @@ static inline void flush_tlb_range(struct vm_area_struct *vma,
 #include <asm/sbi.h>
-#define flush_tlb_all() sbi_remote_sfence_vma(0, 0, -1)
+#define flush_tlb_all() sbi_remote_sfence_vma(NULL, 0, -1)
 #define flush_tlb_page(vma, addr) flush_tlb_range(vma, addr, 0)
 #define flush_tlb_range(vma, start, end) \
 	sbi_remote_sfence_vma(mm_cpumask((vma)->vm_mm)->bits, \

arch/riscv/include/asm/uaccess.h

@@ -392,19 +392,21 @@ do { \
 })
 
-extern unsigned long __must_check __copy_user(void __user *to,
+extern unsigned long __must_check __asm_copy_to_user(void __user *to,
 	const void *from, unsigned long n);
+extern unsigned long __must_check __asm_copy_from_user(void *to,
+	const void __user *from, unsigned long n);
 
 static inline unsigned long
 raw_copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	return __copy_user(to, from, n);
+	return __asm_copy_from_user(to, from, n);
 }
 
 static inline unsigned long
 raw_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	return __copy_user(to, from, n);
+	return __asm_copy_to_user(to, from, n);
 }
 
 extern long strncpy_from_user(char *dest, const char __user *src, long count);

arch/riscv/kernel/Makefile

@@ -39,4 +39,6 @@ obj-$(CONFIG_MODULE_SECTIONS) += module-sections.o
 obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
 obj-$(CONFIG_DYNAMIC_FTRACE) += mcount-dyn.o
 
+obj-$(CONFIG_PERF_EVENTS) += perf_event.o
+
 clean:

arch/riscv/kernel/mcount.S

@@ -126,5 +126,5 @@ do_trace:
 	RESTORE_ABI_STATE
 	ret
 ENDPROC(_mcount)
-EXPORT_SYMBOL(_mcount)
 #endif
+EXPORT_SYMBOL(_mcount)

arch/riscv/kernel/module.c

@@ -17,6 +17,17 @@
 #include <linux/errno.h>
 #include <linux/moduleloader.h>
 
+static int apply_r_riscv_32_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+	if (v != (u32)v) {
+		pr_err("%s: value %016llx out of range for 32-bit field\n",
+		       me->name, v);
+		return -EINVAL;
+	}
+	*location = v;
+	return 0;
+}
+
 static int apply_r_riscv_64_rela(struct module *me, u32 *location, Elf_Addr v)
 {
 	*(u64 *)location = v;
@@ -265,6 +276,7 @@ static int apply_r_riscv_sub32_rela(struct module *me, u32 *location,
 static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
 				     Elf_Addr v) = {
+	[R_RISCV_32] = apply_r_riscv_32_rela,
 	[R_RISCV_64] = apply_r_riscv_64_rela,
 	[R_RISCV_BRANCH] = apply_r_riscv_branch_rela,
 	[R_RISCV_JAL] = apply_r_riscv_jal_rela,

arch/riscv/kernel/perf_event.c (new file)

@@ -0,0 +1,485 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2008 Thomas Gleixner <tglx@linutronix.de>
* Copyright (C) 2008-2009 Red Hat, Inc., Ingo Molnar
* Copyright (C) 2009 Jaswinder Singh Rajput
* Copyright (C) 2009 Advanced Micro Devices, Inc., Robert Richter
* Copyright (C) 2008-2009 Red Hat, Inc., Peter Zijlstra
* Copyright (C) 2009 Intel Corporation, <markus.t.metzger@intel.com>
* Copyright (C) 2009 Google, Inc., Stephane Eranian
* Copyright 2014 Tilera Corporation. All Rights Reserved.
* Copyright (C) 2018 Andes Technology Corporation
*
* Perf_events support for RISC-V platforms.
*
* Since the spec (as of now, Priv-Spec 1.10) does not provide enough
* functionality for perf events to fully work, this file provides
* the very basic framework only.
*
* For platform porting, please check Documentation/riscv/pmu.txt.
*
* The Copyright line includes x86 and tile ones.
*/
#include <linux/kprobes.h>
#include <linux/kernel.h>
#include <linux/kdebug.h>
#include <linux/mutex.h>
#include <linux/bitmap.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/perf_event.h>
#include <linux/atomic.h>
#include <linux/of.h>
#include <asm/perf_event.h>
static const struct riscv_pmu *riscv_pmu __read_mostly;
static DEFINE_PER_CPU(struct cpu_hw_events, cpu_hw_events);
/*
* Hardware & cache maps and their methods
*/
static const int riscv_hw_event_map[] = {
[PERF_COUNT_HW_CPU_CYCLES] = RISCV_PMU_CYCLE,
[PERF_COUNT_HW_INSTRUCTIONS] = RISCV_PMU_INSTRET,
[PERF_COUNT_HW_CACHE_REFERENCES] = RISCV_OP_UNSUPP,
[PERF_COUNT_HW_CACHE_MISSES] = RISCV_OP_UNSUPP,
[PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = RISCV_OP_UNSUPP,
[PERF_COUNT_HW_BRANCH_MISSES] = RISCV_OP_UNSUPP,
[PERF_COUNT_HW_BUS_CYCLES] = RISCV_OP_UNSUPP,
};
#define C(x) PERF_COUNT_HW_CACHE_##x
static const int riscv_cache_event_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
[C(L1D)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
[C(L1I)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
[C(LL)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
[C(DTLB)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
[C(ITLB)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
[C(BPU)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
[C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = RISCV_OP_UNSUPP,
[C(RESULT_MISS)] = RISCV_OP_UNSUPP,
},
},
};
static int riscv_map_hw_event(u64 config)
{
if (config >= riscv_pmu->max_events)
return -EINVAL;
return riscv_pmu->hw_events[config];
}
int riscv_map_cache_decode(u64 config, unsigned int *type,
unsigned int *op, unsigned int *result)
{
return -ENOENT;
}
static int riscv_map_cache_event(u64 config)
{
unsigned int type, op, result;
int err = -ENOENT;
int code;
err = riscv_map_cache_decode(config, &type, &op, &result);
if (!riscv_pmu->cache_events || err)
return err;
if (type >= PERF_COUNT_HW_CACHE_MAX ||
op >= PERF_COUNT_HW_CACHE_OP_MAX ||
result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
return -EINVAL;
code = (*riscv_pmu->cache_events)[type][op][result];
if (code == RISCV_OP_UNSUPP)
return -EINVAL;
return code;
}
/*
* Low-level functions: reading/writing counters
*/
static inline u64 read_counter(int idx)
{
u64 val = 0;
switch (idx) {
case RISCV_PMU_CYCLE:
val = csr_read(cycle);
break;
case RISCV_PMU_INSTRET:
val = csr_read(instret);
break;
default:
WARN_ON_ONCE(idx < 0 || idx > RISCV_MAX_COUNTERS);
return -EINVAL;
}
return val;
}
static inline void write_counter(int idx, u64 value)
{
/* currently not supported */
WARN_ON_ONCE(1);
}
/*
* pmu->read: read and update the counter
*
* Other architectures' implementations often have a xxx_perf_event_update
* routine, which can return counter values when called in the IRQ, but
* returns void when called via the pmu->read method.
*/
static void riscv_pmu_read(struct perf_event *event)
{
struct hw_perf_event *hwc = &event->hw;
u64 prev_raw_count, new_raw_count;
u64 oldval;
int idx = hwc->idx;
u64 delta;
do {
prev_raw_count = local64_read(&hwc->prev_count);
new_raw_count = read_counter(idx);
oldval = local64_cmpxchg(&hwc->prev_count, prev_raw_count,
new_raw_count);
} while (oldval != prev_raw_count);
/*
* delta is the value to update the counter we maintain in the kernel.
*/
delta = (new_raw_count - prev_raw_count) &
((1ULL << riscv_pmu->counter_width) - 1);
local64_add(delta, &event->count);
/*
* Something like local64_sub(delta, &hwc->period_left) here is
* needed if there is an interrupt for perf.
*/
}
/*
* State transition functions:
*
* stop()/start() & add()/del()
*/
/*
* pmu->stop: stop the counter
*/
static void riscv_pmu_stop(struct perf_event *event, int flags)
{
struct hw_perf_event *hwc = &event->hw;
WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
hwc->state |= PERF_HES_STOPPED;
if ((flags & PERF_EF_UPDATE) && !(hwc->state & PERF_HES_UPTODATE)) {
riscv_pmu->pmu->read(event);
hwc->state |= PERF_HES_UPTODATE;
}
}
/*
* pmu->start: start the event.
*/
static void riscv_pmu_start(struct perf_event *event, int flags)
{
struct hw_perf_event *hwc = &event->hw;
if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
return;
if (flags & PERF_EF_RELOAD) {
WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
/*
* Set the counter to the period to the next interrupt here,
* if you have any.
*/
}
hwc->state = 0;
perf_event_update_userpage(event);
/*
* Since we cannot write to counters, this serves as an initialization
* to the delta-mechanism in pmu->read(); otherwise, the delta would be
* wrong when pmu->read is called for the first time.
*/
local64_set(&hwc->prev_count, read_counter(hwc->idx));
}
/*
* pmu->add: add the event to PMU.
*/
static int riscv_pmu_add(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
if (cpuc->n_events == riscv_pmu->num_counters)
return -ENOSPC;
/*
* We don't have general counters, so there is no binding-event-to-counter
* process here.
*
* Indexing with hwc->config generally does not work, since config may
* contain extra information; but here the only info we have in
* hwc->config is the event index.
*/
hwc->idx = hwc->config;
cpuc->events[hwc->idx] = event;
cpuc->n_events++;
hwc->state = PERF_HES_UPTODATE | PERF_HES_STOPPED;
if (flags & PERF_EF_START)
riscv_pmu->pmu->start(event, PERF_EF_RELOAD);
return 0;
}
/*
* pmu->del: delete the event from PMU.
*/
static void riscv_pmu_del(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
cpuc->events[hwc->idx] = NULL;
cpuc->n_events--;
riscv_pmu->pmu->stop(event, PERF_EF_UPDATE);
perf_event_update_userpage(event);
}
/*
* Interrupt: a skeleton for reference.
*/
static DEFINE_MUTEX(pmc_reserve_mutex);
irqreturn_t riscv_base_pmu_handle_irq(int irq_num, void *dev)
{
return IRQ_NONE;
}
static int reserve_pmc_hardware(void)
{
int err = 0;
mutex_lock(&pmc_reserve_mutex);
if (riscv_pmu->irq >= 0 && riscv_pmu->handle_irq) {
err = request_irq(riscv_pmu->irq, riscv_pmu->handle_irq,
IRQF_PERCPU, "riscv-base-perf", NULL);
}
mutex_unlock(&pmc_reserve_mutex);
return err;
}
void release_pmc_hardware(void)
{
mutex_lock(&pmc_reserve_mutex);
if (riscv_pmu->irq >= 0)
free_irq(riscv_pmu->irq, NULL);
mutex_unlock(&pmc_reserve_mutex);
}
/*
* Event Initialization/Finalization
*/
static atomic_t riscv_active_events = ATOMIC_INIT(0);
static void riscv_event_destroy(struct perf_event *event)
{
if (atomic_dec_return(&riscv_active_events) == 0)
release_pmc_hardware();
}
static int riscv_event_init(struct perf_event *event)
{
struct perf_event_attr *attr = &event->attr;
struct hw_perf_event *hwc = &event->hw;
int err;
int code;
if (atomic_inc_return(&riscv_active_events) == 1) {
err = reserve_pmc_hardware();
if (err) {
pr_warn("PMC hardware not available\n");
atomic_dec(&riscv_active_events);
return -EBUSY;
}
}
switch (event->attr.type) {
case PERF_TYPE_HARDWARE:
code = riscv_pmu->map_hw_event(attr->config);
break;
case PERF_TYPE_HW_CACHE:
code = riscv_pmu->map_cache_event(attr->config);
break;
case PERF_TYPE_RAW:
return -EOPNOTSUPP;
default:
return -ENOENT;
}
event->destroy = riscv_event_destroy;
if (code < 0) {
event->destroy(event);
return code;
}
/*
* idx is set to -1 because the index of a general event should not be
* decided until binding to some counter in pmu->add().
*
* But since we don't have such support, later in pmu->add(), we just
* use hwc->config as the index instead.
*/
hwc->config = code;
hwc->idx = -1;
return 0;
}
/*
* Initialization
*/
static struct pmu min_pmu = {
.name = "riscv-base",
.event_init = riscv_event_init,
.add = riscv_pmu_add,
.del = riscv_pmu_del,
.start = riscv_pmu_start,
.stop = riscv_pmu_stop,
.read = riscv_pmu_read,
};
static const struct riscv_pmu riscv_base_pmu = {
.pmu = &min_pmu,
.max_events = ARRAY_SIZE(riscv_hw_event_map),
.map_hw_event = riscv_map_hw_event,
.hw_events = riscv_hw_event_map,
.map_cache_event = riscv_map_cache_event,
.cache_events = &riscv_cache_event_map,
.counter_width = 63,
.num_counters = RISCV_BASE_COUNTERS + 0,
.handle_irq = &riscv_base_pmu_handle_irq,
/* This means this PMU has no IRQ. */
.irq = -1,
};
static const struct of_device_id riscv_pmu_of_ids[] = {
{.compatible = "riscv,base-pmu", .data = &riscv_base_pmu},
{ /* sentinel value */ }
};
int __init init_hw_perf_events(void)
{
struct device_node *node = of_find_node_by_type(NULL, "pmu");
const struct of_device_id *of_id;
riscv_pmu = &riscv_base_pmu;
if (node) {
of_id = of_match_node(riscv_pmu_of_ids, node);
if (of_id)
riscv_pmu = of_id->data;
}
perf_pmu_register(riscv_pmu->pmu, "cpu", PERF_TYPE_RAW);
return 0;
}
arch_initcall(init_hw_perf_events);

arch/riscv/kernel/riscv_ksyms.c

@@ -13,6 +13,7 @@
  * Assembly functions that may be used (directly or indirectly) by modules
  */
 EXPORT_SYMBOL(__clear_user);
-EXPORT_SYMBOL(__copy_user);
+EXPORT_SYMBOL(__asm_copy_to_user);
+EXPORT_SYMBOL(__asm_copy_from_user);
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memcpy);

arch/riscv/kernel/traps.c

@@ -148,7 +148,7 @@ int is_valid_bugaddr(unsigned long pc)
 	if (pc < PAGE_OFFSET)
 		return 0;
-	if (probe_kernel_address((bug_insn_t __user *)pc, insn))
+	if (probe_kernel_address((bug_insn_t *)pc, insn))
 		return 0;
 	return (insn == __BUG_INSN);
 }

arch/riscv/lib/uaccess.S

@@ -13,7 +13,8 @@ _epc:
 	.previous
 	.endm
 
-ENTRY(__copy_user)
+ENTRY(__asm_copy_to_user)
+ENTRY(__asm_copy_from_user)
 
 	/* Enable access to user memory */
 	li t6, SR_SUM
@@ -63,7 +64,8 @@ ENTRY(__copy_user)
 	addi a0, a0, 1
 	bltu a1, a3, 5b
 	j 3b
-ENDPROC(__copy_user)
+ENDPROC(__asm_copy_to_user)
+ENDPROC(__asm_copy_from_user)
 
 ENTRY(__clear_user)
@@ -84,7 +86,7 @@ ENTRY(__clear_user)
 	bgeu t0, t1, 2f
 	bltu a0, t0, 4f
 1:
-	fixup REG_S, zero, (a0), 10f
+	fixup REG_S, zero, (a0), 11f
 	addi a0, a0, SZREG
 	bltu a0, t1, 1b
 2:
@@ -96,12 +98,12 @@ ENTRY(__clear_user)
 	li a0, 0
 	ret
 4: /* Edge case: unalignment */
-	fixup sb, zero, (a0), 10f
+	fixup sb, zero, (a0), 11f
 	addi a0, a0, 1
 	bltu a0, t0, 4b
 	j 1b
 5: /* Edge case: remainder */
-	fixup sb, zero, (a0), 10f
+	fixup sb, zero, (a0), 11f
 	addi a0, a0, 1
 	bltu a0, a3, 5b
 	j 3b
@@ -109,9 +111,14 @@ ENDPROC(__clear_user)
 	.section .fixup,"ax"
 	.balign 4
+	/* Fixup code for __copy_user(10) and __clear_user(11) */
 10:
 	/* Disable access to user memory */
 	csrs sstatus, t6
-	sub a0, a3, a0
+	mv a0, a2
 	ret
+11:
+	csrs sstatus, t6
+	mv a0, a1
+	ret
 	.previous