Merge drm/drm-next into drm-misc-next

We need commit f8f6ae5d07 ("mm: always have io_remap_pfn_range() set
pgprot_decrypted()") to be able to merge Jason's cleanup patch.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Thomas Zimmermann 2020-11-10 17:11:37 +01:00
Parents 55c8bcaecc 512bce50a4
Commit 112e505a76
911 changed files: 295017 additions and 7267 deletions


@ -1,29 +1,29 @@
What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/cap
Date: December 3, 2009
KernelVersion: 2.6.32
Contact: dmaengine@vger.kernel.org
Description: Capabilities the DMA supports.Currently there are DMA_PQ, DMA_PQ_VAL,
DMA_XOR,DMA_XOR_VAL,DMA_INTERRUPT.
What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_active
Date: December 3, 2009
KernelVersion: 2.6.32
Contact: dmaengine@vger.kernel.org
Description: The number of descriptors active in the ring.
What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/ring_size
Date: December 3, 2009
KernelVersion: 2.6.32
Contact: dmaengine@vger.kernel.org
Description: Descriptor ring size, total number of descriptors available.
What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/version
Date: December 3, 2009
KernelVersion: 2.6.32
Contact: dmaengine@vger.kernel.org
Description: Version of ioatdma device.
What: sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
What: /sys/devices/pciXXXX:XX/0000:XX:XX.X/dma/dma<n>chan<n>/quickdata/intr_coalesce
Date: August 8, 2017
KernelVersion: 4.14
Contact: dmaengine@vger.kernel.org


@ -152,7 +152,7 @@ Description:
When an interface is under test, it cannot be expected
to pass packets as normal.
What: /sys/clas/net/<iface>/duplex
What: /sys/class/net/<iface>/duplex
Date: October 2009
KernelVersion: 2.6.33
Contact: netdev@vger.kernel.org


@ -26,6 +26,10 @@ BUILDDIR = $(obj)/output
PDFLATEX = xelatex
LATEXOPTS = -interaction=batchmode
ifeq ($(KBUILD_VERBOSE),0)
SPHINXOPTS += "-q"
endif
# User-friendly check for sphinx-build
HAVE_SPHINX := $(shell if which $(SPHINXBUILD) >/dev/null 2>&1; then echo 1; else echo 0; fi)


@ -107,7 +107,7 @@ for a UID/GID will prevent that UID/GID from obtaining auxiliary setid
privileges, such as allowing a user to set up user namespace UID/GID mappings.
Note on GID policies and setgroups()
==================
====================================
In v5.9 we are adding support for limiting CAP_SETGID privileges as was done
previously for CAP_SETUID. However, for compatibility with common sandboxing
related code conventions in userspace, we currently allow arbitrary


@ -478,7 +478,7 @@ order to ask the hardware to enter that state. Also, for each
statistics of the given idle state. That information is exposed by the kernel
via ``sysfs``.
For each CPU in the system, there is a :file:`/sys/devices/system/cpu<N>/cpuidle/`
For each CPU in the system, there is a :file:`/sys/devices/system/cpu/cpu<N>/cpuidle/`
directory in ``sysfs``, where the number ``<N>`` is assigned to the given
CPU at the initialization time. That directory contains a set of subdirectories
called :file:`state0`, :file:`state1` and so on, up to the number of idle state
@ -494,7 +494,7 @@ object corresponding to it, as follows:
residency.
``below``
Total number of times this idle state had been asked for, but cerainly
Total number of times this idle state had been asked for, but certainly
a deeper idle state would have been a better match for the observed idle
duration.
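As a quick illustration (a sketch added here, not taken from the kernel
sources; the ``cpu0``/``state1`` paths are assumptions and must exist on the
running system), these counters are plain text files and can be read like any
other ``sysfs`` attribute::

    #include <stdio.h>

    int main(void)
    {
            /* Assumed path; substitute any existing cpu<N>/state<i>. */
            const char *path = "/sys/devices/system/cpu/cpu0/cpuidle/state1/below";
            unsigned long long below;
            FILE *f = fopen(path, "r");

            if (!f)
                    return 1;
            if (fscanf(f, "%llu", &below) == 1)
                    printf("cpu0 state1 'below' count: %llu\n", below);
            fclose(f);
            return 0;
    }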


@ -300,6 +300,7 @@ Note:
0: 0 1 2 3 4 5 6 7
RSS hash key:
84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89
netdev_tstamp_prequeue
----------------------


@ -148,3 +148,13 @@ SunXi family
* User Manual
http://dl.linux-sunxi.org/A64/Allwinner%20A64%20User%20Manual%20v1.0.pdf
- Allwinner H6
* Datasheet
https://linux-sunxi.org/images/5/5c/Allwinner_H6_V200_Datasheet_V1.1.pdf
* User Manual
https://linux-sunxi.org/images/4/46/Allwinner_H6_V200_User_Manual_V1.1.pdf


@ -51,7 +51,7 @@ if major >= 3:
support for Sphinx v3.0 and above is brand new. Be prepared for
possible issues in the generated output.
''')
if minor > 0 or patch >= 2:
if (major > 3) or (minor > 0 or patch >= 2):
# Sphinx c function parser is more pedantic with regards to type
# checking. Due to that, having macros at c:function cause problems.
# Those needed to be scaped by using c_id_attributes[] array


@ -295,11 +295,13 @@ print the number of the test and the status of the test:
pass::
ok 28 - kmalloc_double_kzfree
or, if kmalloc failed::
# kmalloc_large_oob_right: ASSERTION FAILED at lib/test_kasan.c:163
Expected ptr is not null, but is
not ok 4 - kmalloc_large_oob_right
or, if a KASAN report was expected, but not found::
# kmalloc_double_kzfree: EXPECTATION FAILED at lib/test_kasan.c:629


@ -197,7 +197,7 @@ Now add the following to ``drivers/misc/Kconfig``:
config MISC_EXAMPLE_TEST
bool "Test for my example"
depends on MISC_EXAMPLE && KUNIT
depends on MISC_EXAMPLE && KUNIT=y
and the following to ``drivers/misc/Makefile``:


@ -561,6 +561,11 @@ Once the kernel is built and installed, a simple
...will run the tests.
.. note::
Note that you should make sure your test depends on ``KUNIT=y`` in Kconfig
if the test does not support module build. Otherwise, it will trigger
compile errors if ``CONFIG_KUNIT`` is ``m``.
Writing new tests for other architectures
-----------------------------------------


@ -4,7 +4,7 @@ Clock control registers reside in different Hi6220 system controllers,
please refer the following document to know more about the binding rules
for these system controllers:
Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
Documentation/devicetree/bindings/arm/hisilicon/hisilicon.yaml
Required Properties:


@ -32,6 +32,11 @@ description: |
| | vint | bit | | 0 |.....|63| vintx |
| +--------------+ +------------+ |
| |
| Unmap |
| +--------------+ |
Unmapped events ---->| | umapidx |-------------------------> Globalevents
| +--------------+ |
| |
+-----------------------------------------+
Configuration of these Intmap registers that maps global events to vint is
@ -70,6 +75,11 @@ properties:
- description: |
"limit" specifies the limit for translation
ti,unmapped-event-sources:
$ref: /schemas/types.yaml#definitions/phandle-array
description:
Array of phandles to DMA controllers where the unmapped events originate.
required:
- compatible
- reg


@ -0,0 +1,18 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/can/can-controller.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: CAN Controller Generic Binding
maintainers:
- Marc Kleine-Budde <mkl@pengutronix.de>
properties:
$nodename:
pattern: "^can(@.*)?$"
additionalProperties: true
...


@ -0,0 +1,135 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/net/can/fsl,flexcan.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title:
Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
maintainers:
- Marc Kleine-Budde <mkl@pengutronix.de>
allOf:
- $ref: can-controller.yaml#
properties:
compatible:
oneOf:
- enum:
- fsl,imx8qm-flexcan
- fsl,imx8mp-flexcan
- fsl,imx6q-flexcan
- fsl,imx53-flexcan
- fsl,imx35-flexcan
- fsl,imx28-flexcan
- fsl,imx25-flexcan
- fsl,p1010-flexcan
- fsl,vf610-flexcan
- fsl,ls1021ar2-flexcan
- fsl,lx2160ar1-flexcan
- items:
- enum:
- fsl,imx7d-flexcan
- fsl,imx6ul-flexcan
- fsl,imx6sx-flexcan
- const: fsl,imx6q-flexcan
- items:
- enum:
- fsl,ls1028ar1-flexcan
- const: fsl,lx2160ar1-flexcan
reg:
maxItems: 1
interrupts:
maxItems: 1
clocks:
maxItems: 2
clock-names:
items:
- const: ipg
- const: per
clock-frequency:
description: |
The oscillator frequency driving the flexcan device, filled in by the
boot loader. This property should only be used when the operating system
doesn't support the clocks and clock-names properties.
xceiver-supply:
description: Regulator that powers the CAN transceiver.
big-endian:
$ref: /schemas/types.yaml#/definitions/flag
description: |
This means the registers of the FlexCAN controller are big endian. This is
an optional property: if it is not present in the device tree node, the
controller is assumed to be little endian; if it is present, the controller
is assumed to be big endian.
fsl,stop-mode:
description: |
Register bits of stop mode control.
The format should be as follows:
<gpr req_gpr req_bit>
gpr is the phandle to general purpose register node.
req_gpr is the gpr register offset of CAN stop request.
req_bit is the bit offset of CAN stop request.
$ref: /schemas/types.yaml#/definitions/phandle-array
items:
- description: The 'gpr' is the phandle to general purpose register node.
- description: The 'req_gpr' is the gpr register offset of CAN stop request.
maximum: 0xff
- description: The 'req_bit' is the bit offset of CAN stop request.
maximum: 0x1f
fsl,clk-source:
description: |
Select the clock source to the CAN Protocol Engine (PE). It's SoC
implementation dependent. Refer to RM for detailed definition. If this
property is not set in device tree node then driver selects clock source 1
by default.
0: clock source 0 (oscillator clock)
1: clock source 1 (peripheral clock)
$ref: /schemas/types.yaml#/definitions/uint32
default: 1
minimum: 0
maximum: 1
wakeup-source:
$ref: /schemas/types.yaml#/definitions/flag
description:
Enable CAN remote wakeup.
required:
- compatible
- reg
- interrupts
additionalProperties: false
examples:
- |
can@1c000 {
compatible = "fsl,p1010-flexcan";
reg = <0x1c000 0x1000>;
interrupts = <48 0x2>;
interrupt-parent = <&mpic>;
clock-frequency = <200000000>;
fsl,clk-source = <0>;
};
- |
#include <dt-bindings/interrupt-controller/irq.h>
can@2090000 {
compatible = "fsl,imx6q-flexcan";
reg = <0x02090000 0x4000>;
interrupts = <0 110 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clks 1>, <&clks 2>;
clock-names = "ipg", "per";
fsl,stop-mode = <&gpr 0x34 28>;
};


@ -1,57 +0,0 @@
Flexcan CAN controller on Freescale's ARM and PowerPC system-on-a-chip (SOC).
Required properties:
- compatible : Should be "fsl,<processor>-flexcan"
where <processor> is imx8qm, imx6q, imx28, imx53, imx35, imx25, p1010,
vf610, ls1021ar2, lx2160ar1, ls1028ar1.
The ls1028ar1 must be followed by lx2160ar1, e.g.
- "fsl,ls1028ar1-flexcan", "fsl,lx2160ar1-flexcan"
An implementation should also claim any of the following compatibles
that it is fully backwards compatible with:
- fsl,p1010-flexcan
- reg : Offset and length of the register set for this device
- interrupts : Interrupt tuple for this device
Optional properties:
- clock-frequency : The oscillator frequency driving the flexcan device
- xceiver-supply: Regulator that powers the CAN transceiver
- big-endian: This means the registers of FlexCAN controller are big endian.
This is optional property.i.e. if this property is not present in
device tree node then controller is assumed to be little endian.
if this property is present then controller is assumed to be big
endian.
- fsl,stop-mode: register bits of stop mode control, the format is
<&gpr req_gpr req_bit>.
gpr is the phandle to general purpose register node.
req_gpr is the gpr register offset of CAN stop request.
req_bit is the bit offset of CAN stop request.
- fsl,clk-source: Select the clock source to the CAN Protocol Engine (PE).
It's SoC Implementation dependent. Refer to RM for detailed
definition. If this property is not set in device tree node
then driver selects clock source 1 by default.
0: clock source 0 (oscillator clock)
1: clock source 1 (peripheral clock)
- wakeup-source: enable CAN remote wakeup
Example:
can@1c000 {
compatible = "fsl,p1010-flexcan";
reg = <0x1c000 0x1000>;
interrupts = <48 0x2>;
interrupt-parent = <&mpic>;
clock-frequency = <200000000>; // filled in by bootloader
fsl,clk-source = <0>; // select clock source 0 for PE
};


@ -86,9 +86,6 @@ Other Functions
.. kernel-doc:: fs/dax.c
:export:
.. kernel-doc:: fs/direct-io.c
:export:
.. kernel-doc:: fs/libfs.c
:export:


@ -83,10 +83,6 @@ AMDGPU XGMI Support
===================
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
:doc: AMDGPU XGMI Support
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
:internal:
AMDGPU RAS Support
==================
@ -124,9 +120,6 @@ RAS VRAM Bad Pages sysfs Interface
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
:doc: AMDGPU RAS sysfs gpu_vram_bad_pages Interface
.. kernel-doc:: drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
:internal:
Sample Code
-----------
Sample code for testing error injection can be found here:


@ -118,6 +118,12 @@ Atomic Plane Helpers
.. kernel-doc:: drivers/gpu/drm/i915/display/intel_atomic_plane.c
:internal:
Asynchronous Page Flip
----------------------
.. kernel-doc:: drivers/gpu/drm/i915/display/intel_display.c
:doc: asynchronous flip implementation
Output Probing
--------------


@ -20,7 +20,7 @@ ADM1266 is a sequencer that features voltage readback from 17 channels via an
integrated 12 bit SAR ADC, accessed using a PMBus interface.
The driver is a client driver to the core PMBus driver. Please see
Documentation/hwmon/pmbus for details on PMBus client drivers.
Documentation/hwmon/pmbus.rst for details on PMBus client drivers.
Sysfs entries


@ -132,6 +132,7 @@ Hardware Monitoring Kernel Drivers
mcp3021
menf21bmc
mlxreg-fan
mp2975
nct6683
nct6775
nct7802


@ -20,6 +20,7 @@ This driver implements support for Monolithic Power Systems, Inc. (MPS)
vendor dual-loop, digital, multi-phase controller MP2975.
This device:
- Supports up to two power rails.
- Provides 8 pulse-width modulations (PWMs), and can be configured up
to 8-phase operation for rail 1 and up to 4-phase operation for rail
@ -32,10 +33,12 @@ This device:
10-mV DAC, IMVP9 mode with 5-mV DAC.
Device supports:
- SVID interface.
- AVSBus interface.
Device compliant with:
- PMBus rev 1.3 interface.
Device supports direct format for reading output current, output voltage,
@ -45,11 +48,14 @@ Device supports VID and direct formats for reading output voltage.
The below VID modes are supported: VR12, VR13, IMVP9.
The driver provides the next attributes for the current:
- for current in: input, maximum alarm;
- for current out input, maximum alarm and highest values;
- for phase current: input and label.
attributes.
attributes.
The driver exports the following attributes via the 'sysfs' files, where
- 'n' is number of telemetry pages (from 1 to 2);
- 'k' is number of configured phases (from 1 to 8);
- indexes 1, 1*n for "iin";
@ -65,11 +71,14 @@ The driver exports the following attributes via the 'sysfs' files, where
**curr[1-{2n+k}]_label**
The driver provides the next attributes for the voltage:
- for voltage in: input, high critical threshold, high critical alarm, all only
from page 0;
- for voltage out: input, low and high critical thresholds, low and high
critical alarms, from pages 0 and 1;
The driver exports the following attributes via the 'sysfs' files, where
- 'n' is number of telemetry pages (from 1 to 2);
- indexes 1 for "iin";
- indexes n+1, n+2 for "vout";
@ -87,9 +96,12 @@ The driver exports the following attributes via the 'sysfs' files, where
**in[2-{n+1}1_lcrit_alarm**
The driver provides the next attributes for the power:
- for power in alarm and input.
- for power out: highest and input.
The driver exports the following attributes via the 'sysfs' files, where
- 'n' is number of telemetry pages (from 1 to 2);
- indexes 1 for "pin";
- indexes n+1, n+2 for "pout";


@ -25,3 +25,4 @@ LEDs
leds-lp5562
leds-lp55xx
leds-mlxcpld
leds-sc27xx


@ -42,6 +42,7 @@ The validator tracks lock-class usage history and divides the usage into
(4 usages * n STATEs + 1) categories:
where the 4 usages can be:
- 'ever held in STATE context'
- 'ever held as readlock in STATE context'
- 'ever held with STATE enabled'
@ -49,10 +50,12 @@ where the 4 usages can be:
where the n STATEs are coded in kernel/locking/lockdep_states.h and as of
now they include:
- hardirq
- softirq
where the last 1 category is:
- 'ever used' [ == !unused ]
When locking rules are violated, these usage bits are presented in the
@ -96,9 +99,9 @@ exact case is for the lock as of the reporting time.
+--------------+-------------+--------------+
| | irq enabled | irq disabled |
+--------------+-------------+--------------+
| ever in irq | ? | - |
| ever in irq | '?' | '-' |
+--------------+-------------+--------------+
| never in irq | + | . |
| never in irq | '+' | '.' |
+--------------+-------------+--------------+
The character '-' suggests irq is disabled because if otherwise the
@ -216,7 +219,7 @@ looks like this::
BD_MUTEX_PARTITION
};
mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
mutex_lock_nested(&bdev->bd_contains->bd_mutex, BD_MUTEX_PARTITION);
In this case the locking is done on a bdev object that is known to be a
partition.
@ -334,7 +337,7 @@ Troubleshooting:
----------------
The validator tracks a maximum of MAX_LOCKDEP_KEYS number of lock classes.
Exceeding this number will trigger the following lockdep warning:
Exceeding this number will trigger the following lockdep warning::
(DEBUG_LOCKS_WARN_ON(id >= MAX_LOCKDEP_KEYS))
@ -420,7 +423,8 @@ the critical section of another reader of the same lock instance.
The difference between recursive readers and non-recursive readers is because:
recursive readers get blocked only by a write lock *holder*, while non-recursive
readers could get blocked by a write lock *waiter*. Considering the follow example:
readers could get blocked by a write lock *waiter*. Considering the follow
example::
TASK A: TASK B:
@ -448,20 +452,22 @@ There are simply four block conditions:
Block condition matrix, Y means the row blocks the column, and N means otherwise.
| E | r | R |
+---+---+---+---+
E | Y | Y | Y |
| | E | r | R |
+---+---+---+---+
r | Y | Y | N |
| E | Y | Y | Y |
+---+---+---+---+
| r | Y | Y | N |
+---+---+---+---+
| R | Y | Y | N |
+---+---+---+---+
R | Y | Y | N |
(W: writers, r: non-recursive readers, R: recursive readers)
acquired recursively. Unlike non-recursive read locks, recursive read locks
only get blocked by current write lock *holders* other than write lock
*waiters*, for example:
*waiters*, for example::
TASK A: TASK B:
@ -491,7 +497,7 @@ Recursive locks don't block each other, while non-recursive locks do (this is
even true for two non-recursive read locks). A non-recursive lock can block the
corresponding recursive lock, and vice versa.
A deadlock case with recursive locks involved is as follow:
A deadlock case with recursive locks involved is as follow::
TASK A: TASK B:
@ -510,7 +516,7 @@ because there are 3 types for lockers, there are, in theory, 9 types of lock
dependencies, but we can show that 4 types of lock dependencies are enough for
deadlock detection.
For each lock dependency:
For each lock dependency::
L1 -> L2
@ -525,20 +531,25 @@ same types).
With the above combination for simplification, there are 4 types of dependency edges
in the lockdep graph:
1) -(ER)->: exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
1) -(ER)->:
exclusive writer to recursive reader dependency, "X -(ER)-> Y" means
X -> Y and X is a writer and Y is a recursive reader.
2) -(EN)->: exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
2) -(EN)->:
exclusive writer to non-recursive locker dependency, "X -(EN)-> Y" means
X -> Y and X is a writer and Y is either a writer or non-recursive reader.
3) -(SR)->: shared reader to recursive reader dependency, "X -(SR)-> Y" means
3) -(SR)->:
shared reader to recursive reader dependency, "X -(SR)-> Y" means
X -> Y and X is a reader (recursive or not) and Y is a recursive reader.
4) -(SN)->: shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
4) -(SN)->:
shared reader to non-recursive locker dependency, "X -(SN)-> Y" means
X -> Y and X is a reader (recursive or not) and Y is either a writer or
non-recursive reader.
Note that given two locks, they may have multiple dependencies between them, for example:
Note that given two locks, they may have multiple dependencies between them,
for example::
TASK A:
@ -592,11 +603,11 @@ circles that won't cause deadlocks.
Proof for sufficiency (Lemma 1):
Let's say we have a strong circle:
Let's say we have a strong circle::
L1 -> L2 ... -> Ln -> L1
, which means we have dependencies:
, which means we have dependencies::
L1 -> L2
L2 -> L3
@ -633,7 +644,7 @@ a lock held by P2, and P2 is waiting for a lock held by P3, ... and Pn is waitin
for a lock held by P1. Let's name the lock Px is waiting as Lx, so since P1 is waiting
for L1 and holding Ln, so we will have Ln -> L1 in the dependency graph. Similarly,
we have L1 -> L2, L2 -> L3, ..., Ln-1 -> Ln in the dependency graph, which means we
have a circle:
have a circle::
Ln -> L1 -> L2 -> ... -> Ln


@ -24,7 +24,6 @@ fit into other categories.
isl29003
lis3lv02d
max6875
mic/index
pci-endpoint-test
spear-pcie-gadget
uacce


@ -70,6 +70,7 @@ The ``ice`` driver reports the following versions
that both the name (as reported by ``fw.app.name``) and version are
required to uniquely identify the package.
* - ``fw.app.bundle_id``
- running
- 0xc0000001
- Unique identifier for the DDP package loaded in the device. Also
referred to as the DDP Track ID. Can be used to uniquely identify


@ -10,9 +10,9 @@ Overview / What Is J1939
SAE J1939 defines a higher layer protocol on CAN. It implements a more
sophisticated addressing scheme and extends the maximum packet size above 8
bytes. Several derived specifications exist, which differ from the original
J1939 on the application level, like MilCAN A, NMEA2000 and especially
J1939 on the application level, like MilCAN A, NMEA2000, and especially
ISO-11783 (ISOBUS). This last one specifies the so-called ETP (Extended
Transport Protocol) which is has been included in this implementation. This
Transport Protocol), which has been included in this implementation. This
results in a maximum packet size of ((2 ^ 24) - 1) * 7 bytes == 111 MiB.
Specifications used
@ -32,15 +32,15 @@ sockets, we found some reasons to justify a kernel implementation for the
addressing and transport methods used by J1939.
* **Addressing:** when a process on an ECU communicates via J1939, it should
not necessarily know its source address. Although at least one process per
not necessarily know its source address. Although, at least one process per
ECU should know the source address. Other processes should be able to reuse
that address. This way, address parameters for different processes
cooperating for the same ECU, are not duplicated. This way of working is
closely related to the UNIX concept where programs do just one thing, and do
closely related to the UNIX concept, where programs do just one thing and do
it well.
* **Dynamic addressing:** Address Claiming in J1939 is time critical.
Furthermore data transport should be handled properly during the address
Furthermore, data transport should be handled properly during the address
negotiation. Putting this functionality in the kernel eliminates it as a
requirement for _every_ user space process that communicates via J1939. This
results in a consistent J1939 bus with proper addressing.
@ -58,7 +58,7 @@ Therefore, these parts are left to user space.
The J1939 sockets operate on CAN network devices (see SocketCAN). Any J1939
user space library operating on CAN raw sockets will still operate properly.
Since such library does not communicate with the in-kernel implementation, care
Since such a library does not communicate with the in-kernel implementation, care
must be taken that these two do not interfere. In practice, this means they
cannot share ECU addresses. A single ECU (or virtual ECU) address is used by
the library exclusively, or by the in-kernel system exclusively.
@ -77,13 +77,13 @@ is composed as follows:
8 bits : PS (PDU Specific)
In J1939-21 distinction is made between PDU1 format (where PF < 240) and PDU2
format (where PF >= 240). Furthermore, when using PDU2 format, the PS-field
format (where PF >= 240). Furthermore, when using the PDU2 format, the PS-field
contains a so-called Group Extension, which is part of the PGN. When using PDU2
format, the Group Extension is set in the PS-field.
On the other hand, when using PDU1 format, the PS-field contains a so-called
Destination Address, which is _not_ part of the PGN. When communicating a PGN
from user space to kernel (or visa versa) and PDU2 format is used, the PS-field
from user space to kernel (or vice versa) and PDU2 format is used, the PS-field
of the PGN shall be set to zero. The Destination Address shall be set
elsewhere.
@ -96,15 +96,15 @@ Addressing
Both static and dynamic addressing methods can be used.
For static addresses, no extra checks are made by the kernel, and provided
For static addresses, no extra checks are made by the kernel and provided
addresses are considered right. This responsibility is for the OEM or system
integrator.
For dynamic addressing, so-called Address Claiming, extra support is foreseen
in the kernel. In J1939 any ECU is known by it's 64-bit NAME. At the moment of
in the kernel. In J1939 any ECU is known by its 64-bit NAME. At the moment of
a successful address claim, the kernel keeps track of both NAME and source
address being claimed. This serves as a base for filter schemes. By default,
packets with a destination that is not locally, will be rejected.
packets with a destination that is not local will be rejected.
Mixed mode packets (from a static to a dynamic address or vice versa) are
allowed. The BSD sockets define separate API calls for getting/setting the
@ -131,31 +131,31 @@ API Calls
---------
On CAN, you first need to open a socket for communicating over a CAN network.
To use J1939, #include <linux/can/j1939.h>. From there, <linux/can.h> will be
To use J1939, ``#include <linux/can/j1939.h>``. From there, ``<linux/can.h>`` will be
included too. To open a socket, use:
.. code-block:: C
s = socket(PF_CAN, SOCK_DGRAM, CAN_J1939);
J1939 does use SOCK_DGRAM sockets. In the J1939 specification, connections are
J1939 does use ``SOCK_DGRAM`` sockets. In the J1939 specification, connections are
mentioned in the context of transport protocol sessions. These still deliver
packets to the other end (using several CAN packets). SOCK_STREAM is not
packets to the other end (using several CAN packets). ``SOCK_STREAM`` is not
supported.
After the successful creation of the socket, you would normally use the bind(2)
and/or connect(2) system call to bind the socket to a CAN interface. After
binding and/or connecting the socket, you can read(2) and write(2) from/to the
socket or use send(2), sendto(2), sendmsg(2) and the recv*() counterpart
After the successful creation of the socket, you would normally use the ``bind(2)``
and/or ``connect(2)`` system call to bind the socket to a CAN interface. After
binding and/or connecting the socket, you can ``read(2)`` and ``write(2)`` from/to the
socket or use ``send(2)``, ``sendto(2)``, ``sendmsg(2)`` and the ``recv*()`` counterpart
operations on the socket as usual. There are also J1939 specific socket options
described below.
In order to send data, a bind(2) must have been successful. bind(2) assigns a
In order to send data, a ``bind(2)`` must have been successful. ``bind(2)`` assigns a
local address to a socket.
Different from CAN is that the payload data is just the data that get send,
without it's header info. The header info is derived from the sockaddr supplied
to bind(2), connect(2), sendto(2) and recvfrom(2). A write(2) with size 4 will
Different from CAN is that the payload data is just the data that gets sent,
without its header info. The header info is derived from the sockaddr supplied
to ``bind(2)``, ``connect(2)``, ``sendto(2)`` and ``recvfrom(2)``. A ``write(2)`` with size 4 will
result in a packet with 4 bytes.
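
As a compact illustration of the calls above (a sketch added for clarity, not
part of the original documentation; the interface name "can0", the local
address 0x20, the destination address 0x30 and the PGN 0x12300 are all
assumptions), opening, binding, connecting and writing four bytes could look
like this:

.. code-block:: C

    #include <linux/can.h>
    #include <linux/can/j1939.h>
    #include <net/if.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int j1939_send_four_bytes(void)
    {
            struct sockaddr_can saddr = {
                    .can_family = AF_CAN,
                    .can_ifindex = if_nametoindex("can0"), /* assumed interface */
                    .can_addr.j1939 = {
                            .name = J1939_NO_NAME,
                            .addr = 0x20,          /* assumed local source address */
                            .pgn = J1939_NO_PGN,   /* no RX PGN filter */
                    },
            };
            struct sockaddr_can daddr = saddr;
            uint8_t dat[4] = { 0xde, 0xad, 0xbe, 0xef };
            int sock, ret = -1;

            sock = socket(PF_CAN, SOCK_DGRAM, CAN_J1939);
            if (sock < 0)
                    return -1;

            /* bind(2) assigns the local (source) address ... */
            if (bind(sock, (struct sockaddr *)&saddr, sizeof(saddr)) < 0)
                    goto out;

            /* ... connect(2) assigns the remote (destination) address and PGN ... */
            daddr.can_addr.j1939.addr = 0x30;     /* assumed destination address */
            daddr.can_addr.j1939.pgn = 0x12300;   /* assumed PGN */
            if (connect(sock, (struct sockaddr *)&daddr, sizeof(daddr)) < 0)
                    goto out;

            /* ... so a plain write(2) of 4 bytes becomes one packet with 4 data bytes. */
            if (write(sock, dat, sizeof(dat)) == sizeof(dat))
                    ret = 0;
    out:
            close(sock);
            return ret;
    }

Error handling is kept minimal here; a real application would also consider
``SO_BROADCAST`` and the filter options described below.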
The sockaddr structure has extensions for use with J1939 as specified below:
@ -180,47 +180,47 @@ The sockaddr structure has extensions for use with J1939 as specified below:
} can_addr;
}
can_family & can_ifindex serve the same purpose as for other SocketCAN sockets.
``can_family`` & ``can_ifindex`` serve the same purpose as for other SocketCAN sockets.
can_addr.j1939.pgn specifies the PGN (max 0x3ffff). Individual bits are
``can_addr.j1939.pgn`` specifies the PGN (max 0x3ffff). Individual bits are
specified above.
can_addr.j1939.name contains the 64-bit J1939 NAME.
``can_addr.j1939.name`` contains the 64-bit J1939 NAME.
can_addr.j1939.addr contains the address.
``can_addr.j1939.addr`` contains the address.
The bind(2) system call assigns the local address, i.e. the source address when
sending packages. If a PGN during bind(2) is set, it's used as a RX filter.
I.e. only packets with a matching PGN are received. If an ADDR or NAME is set
The ``bind(2)`` system call assigns the local address, i.e. the source address when
sending packages. If a PGN during ``bind(2)`` is set, it's used as a RX filter.
I.e. only packets with a matching PGN are received. If an ADDR or NAME is set
it is used as a receive filter, too. It will match the destination NAME or ADDR
of the incoming packet. The NAME filter will work only if appropriate Address
Claiming for this name was done on the CAN bus and registered/cached by the
kernel.
On the other hand connect(2) assigns the remote address, i.e. the destination
address. The PGN from connect(2) is used as the default PGN when sending
On the other hand ``connect(2)`` assigns the remote address, i.e. the destination
address. The PGN from ``connect(2)`` is used as the default PGN when sending
packets. If ADDR or NAME is set it will be used as the default destination ADDR
or NAME. Further a set ADDR or NAME during connect(2) is used as a receive
or NAME. Further a set ADDR or NAME during ``connect(2)`` is used as a receive
filter. It will match the source NAME or ADDR of the incoming packet.
Both write(2) and send(2) will send a packet with local address from bind(2) and
the remote address from connect(2). Use sendto(2) to overwrite the destination
Both ``write(2)`` and ``send(2)`` will send a packet with local address from ``bind(2)`` and the
remote address from ``connect(2)``. Use ``sendto(2)`` to overwrite the destination
address.
If can_addr.j1939.name is set (!= 0) the NAME is looked up by the kernel and
the corresponding ADDR is used. If can_addr.j1939.name is not set (== 0),
can_addr.j1939.addr is used.
If ``can_addr.j1939.name`` is set (!= 0) the NAME is looked up by the kernel and
the corresponding ADDR is used. If ``can_addr.j1939.name`` is not set (== 0),
``can_addr.j1939.addr`` is used.
When creating a socket, reasonable defaults are set. Some options can be
modified with setsockopt(2) & getsockopt(2).
modified with ``setsockopt(2)`` & ``getsockopt(2)``.
RX path related options:
- SO_J1939_FILTER - configure array of filters
- SO_J1939_PROMISC - disable filters set by bind(2) and connect(2)
- ``SO_J1939_FILTER`` - configure array of filters
- ``SO_J1939_PROMISC`` - disable filters set by ``bind(2)`` and ``connect(2)``
By default no broadcast packets can be send or received. To enable sending or
receiving broadcast packets use the socket option SO_BROADCAST:
receiving broadcast packets use the socket option ``SO_BROADCAST``:
.. code-block:: C
@ -261,26 +261,26 @@ The following diagram illustrates the RX path:
+---------------------------+
TX path related options:
SO_J1939_SEND_PRIO - change default send priority for the socket
``SO_J1939_SEND_PRIO`` - change default send priority for the socket
Message Flags during send() and Related System Calls
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
send(2), sendto(2) and sendmsg(2) take a 'flags' argument. Currently
``send(2)``, ``sendto(2)`` and ``sendmsg(2)`` take a 'flags' argument. Currently
supported flags are:
* MSG_DONTWAIT, i.e. non-blocking operation.
* ``MSG_DONTWAIT``, i.e. non-blocking operation.
recvmsg(2)
^^^^^^^^^^
In most cases recvmsg(2) is needed if you want to extract more information than
recvfrom(2) can provide. For example package priority and timestamp. The
In most cases ``recvmsg(2)`` is needed if you want to extract more information than
``recvfrom(2)`` can provide. For example package priority and timestamp. The
Destination Address, name and packet priority (if applicable) are attached to
the msghdr in the recvmsg(2) call. They can be extracted using cmsg(3) macros,
with cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR,
SCM_J1939_DEST_NAME or SCM_J1939_PRIO. The returned data is a uint8_t for
priority and dst_addr, and uint64_t for dst_name.
the msghdr in the ``recvmsg(2)`` call. They can be extracted using ``cmsg(3)`` macros,
with ``cmsg_level == SOL_J1939 && cmsg_type == SCM_J1939_DEST_ADDR``,
``SCM_J1939_DEST_NAME`` or ``SCM_J1939_PRIO``. The returned data is a ``uint8_t`` for
``priority`` and ``dst_addr``, and ``uint64_t`` for ``dst_name``.
.. code-block:: C
@ -305,12 +305,12 @@ Dynamic Addressing
Distinction has to be made between using the claimed address and doing an
address claim. To use an already claimed address, one has to fill in the
j1939.name member and provide it to bind(2). If the name had claimed an address
``j1939.name`` member and provide it to ``bind(2)``. If the name had claimed an address
earlier, all further messages being sent will use that address. And the
j1939.addr member will be ignored.
``j1939.addr`` member will be ignored.
An exception on this is PGN 0x0ee00. This is the "Address Claim/Cannot Claim
Address" message and the kernel will use the j1939.addr member for that PGN if
Address" message and the kernel will use the ``j1939.addr`` member for that PGN if
necessary.
To claim an address following code example can be used:
@ -371,12 +371,12 @@ NAME can send packets.
If another ECU claims the address, the kernel will mark the NAME-SA expired.
No socket bound to the NAME can send packets (other than address claims). To
claim another address, some socket bound to NAME, must bind(2) again, but with
only j1939.addr changed to the new SA, and must then send a valid address claim
claim another address, some socket bound to NAME, must ``bind(2)`` again, but with
only ``j1939.addr`` changed to the new SA, and must then send a valid address claim
packet. This restarts the state machine in the kernel (and any other
participant on the bus) for this NAME.
can-utils also include the jacd tool, so it can be used as code example or as
``can-utils`` also includes the ``j1939acd`` tool, which can be used as a code example or as
default Address Claiming daemon.
Send Examples
@ -403,8 +403,8 @@ Bind:
bind(sock, (struct sockaddr *)&baddr, sizeof(baddr));
Now, the socket 'sock' is bound to the SA 0x20. Since no connect(2) was called,
at this point we can use only sendto(2) or sendmsg(2).
Now, the socket 'sock' is bound to the SA 0x20. Since no ``connect(2)`` was called,
at this point we can use only ``sendto(2)`` or ``sendmsg(2)``.
Send:
@ -414,8 +414,8 @@ Send:
.can_family = AF_CAN,
.can_addr.j1939 = {
.name = J1939_NO_NAME;
.pgn = 0x30,
.addr = 0x12300,
.addr = 0x30,
.pgn = 0x12300,
},
};


@ -175,5 +175,4 @@ The following structures are internal to the kernel, their members are
translated to netlink attributes when dumped. Drivers must not overwrite
the statistics they don't report with 0.
.. kernel-doc:: include/linux/ethtool.h
:identifiers: ethtool_pause_stats
- ethtool_pause_stats()


@ -15,6 +15,14 @@ else:
import re
from itertools import chain
#
# Python 2 lacks re.ASCII...
#
try:
ascii_p3 = re.ASCII
except AttributeError:
ascii_p3 = 0
#
# Regex nastiness. Of course.
# Try to identify "function()" that's not already marked up some
@ -22,22 +30,22 @@ from itertools import chain
# :c:func: block (i.e. ":c:func:`mmap()`s" flakes out), so the last
# bit tries to restrict matches to things that won't create trouble.
#
RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=re.ASCII)
RE_function = re.compile(r'\b(([a-zA-Z_]\w+)\(\))', flags=ascii_p3)
#
# Sphinx 2 uses the same :c:type role for struct, union, enum and typedef
#
RE_generic_type = re.compile(r'\b(struct|union|enum|typedef)\s+([a-zA-Z_]\w+)',
flags=re.ASCII)
flags=ascii_p3)
#
# Sphinx 3 uses a different C role for each one of struct, union, enum and
# typedef
#
RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=re.ASCII)
RE_struct = re.compile(r'\b(struct)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
RE_union = re.compile(r'\b(union)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
RE_enum = re.compile(r'\b(enum)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
RE_typedef = re.compile(r'\b(typedef)\s+([a-zA-Z_]\w+)', flags=ascii_p3)
#
# Detects a reference to a documentation page of the form Documentation/... with


@ -22,6 +22,7 @@ place where this information is gathered.
spec_ctrl
accelerators/ocxl
ioctl/index
iommu
media/index
.. only:: subproject and html


@ -934,7 +934,7 @@ M: Evan Quan <evan.quan@amd.com>
L: amd-gfx@lists.freedesktop.org
S: Supported
T: git git://people.freedesktop.org/~agd5f/linux
F: drivers/gpu/drm/amd/powerplay/
F: drivers/gpu/drm/amd/pm/powerplay/
AMD SEATTLE DEVICE TREE SUPPORT
M: Brijesh Singh <brijeshkumar.singh@amd.com>
@ -978,7 +978,7 @@ M: Michael Hennerich <Michael.Hennerich@analog.com>
L: linux-iio@vger.kernel.org
S: Supported
W: http://ez.analog.com/community/linux-device-drivers
F: Documentation/devicetree/bindings/iio/adc/adi,ad7768-1.txt
F: Documentation/devicetree/bindings/iio/adc/adi,ad7768-1.yaml
F: drivers/iio/adc/ad7768-1.c
ANALOG DEVICES INC AD7780 DRIVER
@ -3857,7 +3857,7 @@ M: Roger Quadros <rogerq@ti.com>
L: linux-usb@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb.git
F: Documentation/devicetree/bindings/usb/cdns-usb3.txt
F: Documentation/devicetree/bindings/usb/cdns,usb3.yaml
F: drivers/usb/cdns3/
CADET FM/AM RADIO RECEIVER DRIVER
@ -7923,7 +7923,7 @@ HISILICON LPC BUS DRIVER
M: john.garry@huawei.com
S: Maintained
W: http://www.hisilicon.com
F: Documentation/devicetree/bindings/arm/hisilicon/hisilicon-low-pin-count.txt
F: Documentation/devicetree/bindings/arm/hisilicon/low-pin-count.yaml
F: drivers/bus/hisi_lpc.c
HISILICON NETWORK SUBSYSTEM 3 DRIVER (HNS3)
@ -11177,7 +11177,7 @@ F: Documentation/devicetree/bindings/input/touchscreen/melfas_mip4.txt
F: drivers/input/touchscreen/melfas_mip4.c
MELLANOX BLUEFIELD I2C DRIVER
M: Khalil Blaiech <kblaiech@mellanox.com>
M: Khalil Blaiech <kblaiech@nvidia.com>
L: linux-i2c@vger.kernel.org
S: Supported
F: drivers/i2c/busses/i2c-mlxbf.c
@ -14541,6 +14541,14 @@ F: Documentation/devicetree/bindings/mailbox/qcom-ipcc.yaml
F: drivers/mailbox/qcom-ipcc.c
F: include/dt-bindings/mailbox/qcom-ipcc.h
QUALCOMM IPQ4019 VQMMC REGULATOR DRIVER
M: Robert Marko <robert.marko@sartura.hr>
M: Luka Perkov <luka.perkov@sartura.hr>
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/regulator/vqmmc-ipq4019-regulator.yaml
F: drivers/regulator/vqmmc-ipq4019-regulator.c
QUALCOMM RMNET DRIVER
M: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
M: Sean Tranchetti <stranche@codeaurora.org>
@ -14896,7 +14904,6 @@ RENESAS ETHERNET DRIVERS
R: Sergei Shtylyov <sergei.shtylyov@gmail.com>
L: netdev@vger.kernel.org
L: linux-renesas-soc@vger.kernel.org
F: Documentation/devicetree/bindings/net/renesas,*.txt
F: Documentation/devicetree/bindings/net/renesas,*.yaml
F: drivers/net/ethernet/renesas/
F: include/linux/sh_eth.h
@ -18097,7 +18104,7 @@ M: Yu Chen <chenyu56@huawei.com>
M: Binghui Wang <wangbinghui@hisilicon.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/phy/phy-hi3660-usb3.txt
F: Documentation/devicetree/bindings/phy/hisilicon,hi3660-usb3.yaml
F: drivers/phy/hisilicon/phy-hi3660-usb3.c
USB ISP116X DRIVER


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 10
SUBLEVEL = 0
EXTRAVERSION = -rc2
EXTRAVERSION = -rc3
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*


@ -67,7 +67,22 @@
sr r5, [ARC_REG_LPB_CTRL]
1:
#endif /* CONFIG_ARC_LPB_DISABLE */
#endif
/* On HSDK, CCMs need to remapped super early */
#ifdef CONFIG_ARC_SOC_HSDK
mov r6, 0x60000000
lr r5, [ARC_REG_ICCM_BUILD]
breq r5, 0, 1f
sr r6, [ARC_REG_AUX_ICCM]
1:
lr r5, [ARC_REG_DCCM_BUILD]
breq r5, 0, 2f
sr r6, [ARC_REG_AUX_DCCM]
2:
#endif /* CONFIG_ARC_SOC_HSDK */
#endif /* CONFIG_ISA_ARCV2 */
; Config DSP_CTRL properly, so kernel may use integer multiply,
; multiply-accumulate, and divide operations
DSP_EARLY_INIT


@ -112,7 +112,7 @@ arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
int (*consumer_fn) (unsigned int, void *), void *arg)
{
#ifdef CONFIG_ARC_DW2_UNWIND
int ret = 0;
int ret = 0, cnt = 0;
unsigned int address;
struct unwind_frame_info frame_info;
@ -132,6 +132,11 @@ arc_unwind_core(struct task_struct *tsk, struct pt_regs *regs,
break;
frame_info.regs.r63 = frame_info.regs.r31;
if (cnt++ > 128) {
printk("unwinder looping too long, aborting !\n");
return 0;
}
}
return address; /* return the last address it saw */


@ -17,22 +17,6 @@ int arc_hsdk_axi_dmac_coherent __section(".data") = 0;
#define ARC_CCM_UNUSED_ADDR 0x60000000
static void __init hsdk_init_per_cpu(unsigned int cpu)
{
/*
* By default ICCM is mapped to 0x7z while this area is used for
* kernel virtual mappings, so move it to currently unused area.
*/
if (cpuinfo_arc700[cpu].iccm.sz)
write_aux_reg(ARC_REG_AUX_ICCM, ARC_CCM_UNUSED_ADDR);
/*
* By default DCCM is mapped to 0x8z while this area is used by kernel,
* so move it to currently unused area.
*/
if (cpuinfo_arc700[cpu].dccm.sz)
write_aux_reg(ARC_REG_AUX_DCCM, ARC_CCM_UNUSED_ADDR);
}
#define ARC_PERIPHERAL_BASE 0xf0000000
#define CREG_BASE (ARC_PERIPHERAL_BASE + 0x1000)
@ -339,5 +323,4 @@ static const char *hsdk_compat[] __initconst = {
MACHINE_START(SIMULATION, "hsdk")
.dt_compat = hsdk_compat,
.init_early = hsdk_init_early,
.init_per_cpu = hsdk_init_per_cpu,
MACHINE_END


@ -354,8 +354,8 @@ static void __init free_highpages(void)
/* set highmem page free */
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
&range_start, &range_end, NULL) {
unsigned long start = PHYS_PFN(range_start);
unsigned long end = PHYS_PFN(range_end);
unsigned long start = PFN_UP(range_start);
unsigned long end = PFN_DOWN(range_end);
/* Ignore complete lowmem entries */
if (end <= max_low)


@ -1002,7 +1002,7 @@ config NUMA
config NODES_SHIFT
int "Maximum NUMA Nodes (as a power of 2)"
range 1 10
default "2"
default "4"
depends on NEED_MULTIPLE_NODES
help
Specify the maximum number of NUMA Nodes available on the target


@ -10,6 +10,7 @@
* #imm16 values used for BRK instruction generation
* 0x004: for installing kprobes
* 0x005: for installing uprobes
* 0x006: for kprobe software single-step
* Allowed values for kgdb are 0x400 - 0x7ff
* 0x100: for triggering a fault on purpose (reserved)
* 0x400: for dynamic BRK instruction
@ -19,6 +20,7 @@
*/
#define KPROBES_BRK_IMM 0x004
#define UPROBES_BRK_IMM 0x005
#define KPROBES_BRK_SS_IMM 0x006
#define FAULT_BRK_IMM 0x100
#define KGDB_DYN_DBG_BRK_IMM 0x400
#define KGDB_COMPILED_DBG_BRK_IMM 0x401


@ -53,6 +53,7 @@
/* kprobes BRK opcodes with ESR encoding */
#define BRK64_OPCODE_KPROBES (AARCH64_BREAK_MON | (KPROBES_BRK_IMM << 5))
#define BRK64_OPCODE_KPROBES_SS (AARCH64_BREAK_MON | (KPROBES_BRK_SS_IMM << 5))
/* uprobes BRK opcodes with ESR encoding */
#define BRK64_OPCODE_UPROBES (AARCH64_BREAK_MON | (UPROBES_BRK_IMM << 5))


@ -16,7 +16,7 @@
#include <linux/percpu.h>
#define __ARCH_WANT_KPROBES_INSN_SLOT
#define MAX_INSN_SIZE 1
#define MAX_INSN_SIZE 2
#define flush_insn_slot(p) do { } while (0)
#define kretprobe_blacklist_size 0


@ -43,7 +43,7 @@ static void *image_load(struct kimage *image,
u64 flags, value;
bool be_image, be_kernel;
struct kexec_buf kbuf;
unsigned long text_offset;
unsigned long text_offset, kernel_segment_number;
struct kexec_segment *kernel_segment;
int ret;
@ -88,11 +88,37 @@ static void *image_load(struct kimage *image,
/* Adjust kernel segment with TEXT_OFFSET */
kbuf.memsz += text_offset;
ret = kexec_add_buffer(&kbuf);
if (ret)
return ERR_PTR(ret);
kernel_segment_number = image->nr_segments;
kernel_segment = &image->segment[image->nr_segments - 1];
/*
* The location of the kernel segment may make it impossible to satisfy
* the other segment requirements, so we try repeatedly to find a
* location that will work.
*/
while ((ret = kexec_add_buffer(&kbuf)) == 0) {
/* Try to load additional data */
kernel_segment = &image->segment[kernel_segment_number];
ret = load_other_segments(image, kernel_segment->mem,
kernel_segment->memsz, initrd,
initrd_len, cmdline);
if (!ret)
break;
/*
* We couldn't find space for the other segments; erase the
* kernel segment and try the next available hole.
*/
image->nr_segments -= 1;
kbuf.buf_min = kernel_segment->mem + kernel_segment->memsz;
kbuf.mem = KEXEC_BUF_MEM_UNKNOWN;
}
if (ret) {
pr_err("Could not find any suitable kernel location!");
return ERR_PTR(ret);
}
kernel_segment = &image->segment[kernel_segment_number];
kernel_segment->mem += text_offset;
kernel_segment->memsz -= text_offset;
image->start = kernel_segment->mem;
@ -101,12 +127,7 @@ static void *image_load(struct kimage *image,
kernel_segment->mem, kbuf.bufsz,
kernel_segment->memsz);
/* Load additional data */
ret = load_other_segments(image,
kernel_segment->mem, kernel_segment->memsz,
initrd, initrd_len, cmdline);
return ERR_PTR(ret);
return 0;
}
#ifdef CONFIG_KEXEC_IMAGE_VERIFY_SIG


@ -240,6 +240,11 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
return ret;
}
/*
* Tries to add the initrd and DTB to the image. If it is not possible to find
* valid locations, this function will undo changes to the image and return non
* zero.
*/
int load_other_segments(struct kimage *image,
unsigned long kernel_load_addr,
unsigned long kernel_size,
@ -248,7 +253,8 @@ int load_other_segments(struct kimage *image,
{
struct kexec_buf kbuf;
void *headers, *dtb = NULL;
unsigned long headers_sz, initrd_load_addr = 0, dtb_len;
unsigned long headers_sz, initrd_load_addr = 0, dtb_len,
orig_segments = image->nr_segments;
int ret = 0;
kbuf.image = image;
@ -334,6 +340,7 @@ int load_other_segments(struct kimage *image,
return 0;
out_err:
image->nr_segments = orig_segments;
vfree(dtb);
return ret;
}


@ -36,25 +36,16 @@ DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
static void __kprobes
post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
static int __kprobes patch_text(kprobe_opcode_t *addr, u32 opcode)
{
void *addrs[1];
u32 insns[1];
addrs[0] = addr;
insns[0] = opcode;
return aarch64_insn_patch_text(addrs, insns, 1);
}
static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
{
/* prepare insn slot */
patch_text(p->ainsn.api.insn, p->opcode);
kprobe_opcode_t *addr = p->ainsn.api.insn;
void *addrs[] = {addr, addr + 1};
u32 insns[] = {p->opcode, BRK64_OPCODE_KPROBES_SS};
flush_icache_range((uintptr_t) (p->ainsn.api.insn),
(uintptr_t) (p->ainsn.api.insn) +
MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
/* prepare insn slot */
aarch64_insn_patch_text(addrs, insns, 2);
flush_icache_range((uintptr_t)addr, (uintptr_t)(addr + MAX_INSN_SIZE));
/*
* Needs restoring of return address after stepping xol.
@ -128,13 +119,18 @@ void *alloc_insn_page(void)
/* arm kprobe: install breakpoint in text */
void __kprobes arch_arm_kprobe(struct kprobe *p)
{
patch_text(p->addr, BRK64_OPCODE_KPROBES);
void *addr = p->addr;
u32 insn = BRK64_OPCODE_KPROBES;
aarch64_insn_patch_text(&addr, &insn, 1);
}
/* disarm kprobe: remove breakpoint from text */
void __kprobes arch_disarm_kprobe(struct kprobe *p)
{
patch_text(p->addr, p->opcode);
void *addr = p->addr;
aarch64_insn_patch_text(&addr, &p->opcode, 1);
}
void __kprobes arch_remove_kprobe(struct kprobe *p)
@ -163,20 +159,15 @@ static void __kprobes set_current_kprobe(struct kprobe *p)
}
/*
* Interrupts need to be disabled before single-step mode is set, and not
* reenabled until after single-step mode ends.
* Without disabling interrupt on local CPU, there is a chance of
* interrupt occurrence in the period of exception return and start of
* out-of-line single-step, that result in wrongly single stepping
* into the interrupt handler.
* Mask all of DAIF while executing the instruction out-of-line, to keep things
* simple and avoid nesting exceptions. Interrupts do have to be disabled since
* the kprobe state is per-CPU and doesn't get migrated.
*/
static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
kcb->saved_irqflag = regs->pstate & DAIF_MASK;
regs->pstate |= PSR_I_BIT;
/* Unmask PSTATE.D for enabling software step exceptions. */
regs->pstate &= ~PSR_D_BIT;
regs->pstate |= DAIF_MASK;
}
static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
@ -219,10 +210,7 @@ static void __kprobes setup_singlestep(struct kprobe *p,
slot = (unsigned long)p->ainsn.api.insn;
set_ss_context(kcb, slot); /* mark pending ss */
/* IRQs and single stepping do not mix well. */
kprobes_save_local_irqflag(kcb, regs);
kernel_enable_single_step(regs);
instruction_pointer_set(regs, slot);
} else {
/* insn simulation */
@ -273,12 +261,8 @@ post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
}
/* call post handler */
kcb->kprobe_status = KPROBE_HIT_SSDONE;
if (cur->post_handler) {
/* post_handler can hit breakpoint and single step
* again, so we enable D-flag for recursive exception.
*/
if (cur->post_handler)
cur->post_handler(cur, regs, 0);
}
reset_current_kprobe();
}
@ -302,8 +286,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
if (!instruction_pointer(regs))
BUG();
kernel_disable_single_step();
if (kcb->kprobe_status == KPROBE_REENTER)
restore_previous_kprobe(kcb);
else
@ -365,10 +347,6 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
* pre-handler and it returned non-zero, it will
* modify the execution path and no need to single
* stepping. Let's just reset current kprobe and exit.
*
* pre_handler can hit a breakpoint and can step thru
* before return, keep PSTATE D-flag enabled until
* pre_handler return back.
*/
if (!p->pre_handler || !p->pre_handler(p, regs)) {
setup_singlestep(p, regs, kcb, 0);
@ -399,7 +377,7 @@ kprobe_ss_hit(struct kprobe_ctlblk *kcb, unsigned long addr)
}
static int __kprobes
kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned int esr)
{
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
int retval;
@ -409,16 +387,15 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr)
if (retval == DBG_HOOK_HANDLED) {
kprobes_restore_local_irqflag(kcb, regs);
kernel_disable_single_step();
post_kprobe_handler(kcb, regs);
}
return retval;
}
static struct step_hook kprobes_step_hook = {
.fn = kprobe_single_step_handler,
static struct break_hook kprobes_break_ss_hook = {
.imm = KPROBES_BRK_SS_IMM,
.fn = kprobe_breakpoint_ss_handler,
};
static int __kprobes
@ -486,7 +463,7 @@ int __kprobes arch_trampoline_kprobe(struct kprobe *p)
int __init arch_init_kprobes(void)
{
register_kernel_break_hook(&kprobes_break_hook);
register_kernel_step_hook(&kprobes_step_hook);
register_kernel_break_hook(&kprobes_break_ss_hook);
return 0;
}


@ -63,7 +63,7 @@ static inline void restore_user_access(unsigned long flags)
static inline bool
bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
{
return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xf0000000),
return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000),
"Bug: fault blocked by AP register !");
}


@ -33,19 +33,18 @@
* respectively NA for All or X for Supervisor and no access for User.
* Then we use the APG to say whether accesses are according to Page rules or
* "all Supervisor" rules (Access to all)
* Therefore, we define 2 APG groups. lsb is _PMD_USER
* 0 => Kernel => 01 (all accesses performed according to page definition)
* 1 => User => 00 (all accesses performed as supervisor iaw page definition)
* 2-15 => Not Used
* _PAGE_ACCESSED is also managed via APG. When _PAGE_ACCESSED is not set, say
* "all User" rules, that will lead to NA for all.
* Therefore, we define 4 APG groups. lsb is _PAGE_ACCESSED
* 0 => Kernel => 11 (all accesses performed according as user iaw page definition)
* 1 => Kernel+Accessed => 01 (all accesses performed according to page definition)
* 2 => User => 11 (all accesses performed according as user iaw page definition)
* 3 => User+Accessed => 00 (all accesses performed as supervisor iaw page definition) for INIT
* => 10 (all accesses performed according to swaped page definition) for KUEP
* 4-15 => Not Used
*/
#define MI_APG_INIT 0x40000000
/*
* 0 => Kernel => 01 (all accesses performed according to page definition)
* 1 => User => 10 (all accesses performed according to swaped page definition)
* 2-15 => Not Used
*/
#define MI_APG_KUEP 0x60000000
#define MI_APG_INIT 0xdc000000
#define MI_APG_KUEP 0xde000000
/* The effective page number register. When read, contains the information
* about the last instruction TLB miss. When MI_RPN is written, bits in
@ -106,25 +105,9 @@
#define MD_Ks 0x80000000 /* Should not be set */
#define MD_Kp 0x40000000 /* Should always be set */
/*
* All pages' PP data bits are set to either 000 or 011 or 001, which means
* respectively RW for Supervisor and no access for User, or RO for
* Supervisor and no access for user and NA for ALL.
* Then we use the APG to say whether accesses are according to Page rules or
* "all Supervisor" rules (Access to all)
* Therefore, we define 2 APG groups. lsb is _PMD_USER
* 0 => Kernel => 01 (all accesses performed according to page definition)
* 1 => User => 00 (all accesses performed as supervisor iaw page definition)
* 2-15 => Not Used
*/
#define MD_APG_INIT 0x40000000
/*
* 0 => No user => 01 (all accesses performed according to page definition)
* 1 => User => 10 (all accesses performed according to swaped page definition)
* 2-15 => Not Used
*/
#define MD_APG_KUAP 0x60000000
/* See explanation above at the definition of MI_APG_INIT */
#define MD_APG_INIT 0xdc000000
#define MD_APG_KUAP 0xde000000
/* The effective page number register. When read, contains the information
* about the last instruction TLB miss. When MD_RPN is written, bits in


@ -39,9 +39,9 @@
* into the TLB.
*/
#define _PAGE_GUARDED 0x0010 /* Copied to L1 G entry in DTLB */
#define _PAGE_SPECIAL 0x0020 /* SW entry */
#define _PAGE_ACCESSED 0x0020 /* Copied to L1 APG 1 entry in I/DTLB */
#define _PAGE_EXEC 0x0040 /* Copied to PP (bit 21) in ITLB */
#define _PAGE_ACCESSED 0x0080 /* software: page referenced */
#define _PAGE_SPECIAL 0x0080 /* SW entry */
#define _PAGE_NA 0x0200 /* Supervisor NA, User no access */
#define _PAGE_RO 0x0600 /* Supervisor RO, User no access */
@ -59,11 +59,12 @@
#define _PMD_PRESENT 0x0001
#define _PMD_PRESENT_MASK _PMD_PRESENT
#define _PMD_BAD 0x0fd0
#define _PMD_BAD 0x0f90
#define _PMD_PAGE_MASK 0x000c
#define _PMD_PAGE_8M 0x000c
#define _PMD_PAGE_512K 0x0004
#define _PMD_USER 0x0020 /* APG 1 */
#define _PMD_ACCESSED 0x0020 /* APG 1 */
#define _PMD_USER 0x0040 /* APG 2 */
#define _PTE_NONE_MASK 0


@ -6,6 +6,7 @@
struct device;
struct device_node;
struct drmem_lmb;
#ifdef CONFIG_NUMA
@ -61,6 +62,9 @@ static inline int early_cpu_to_node(int cpu)
*/
return (nid < 0) ? 0 : nid;
}
int of_drconf_to_nid_single(struct drmem_lmb *lmb);
#else
static inline int early_cpu_to_node(int cpu) { return 0; }
@ -84,10 +88,12 @@ static inline int cpu_distance(__be32 *cpu1_assoc, __be32 *cpu2_assoc)
return 0;
}
#endif /* CONFIG_NUMA */
static inline int of_drconf_to_nid_single(struct drmem_lmb *lmb)
{
return first_online_node;
}
struct drmem_lmb;
int of_drconf_to_nid_single(struct drmem_lmb *lmb);
#endif /* CONFIG_NUMA */
#if defined(CONFIG_NUMA) && defined(CONFIG_PPC_SPLPAR)
extern int find_and_online_cpu_nid(int cpu);


@ -178,7 +178,7 @@ do { \
* are no aliasing issues.
*/
#define __put_user_asm_goto(x, addr, label, op) \
asm volatile goto( \
asm_volatile_goto( \
"1: " op "%U1%X1 %0,%1 # put_user\n" \
EX_TABLE(1b, %l2) \
: \
@ -191,7 +191,7 @@ do { \
__put_user_asm_goto(x, ptr, label, "std")
#else /* __powerpc64__ */
#define __put_user_asm2_goto(x, addr, label) \
asm volatile goto( \
asm_volatile_goto( \
"1: stw%X1 %0, %1\n" \
"2: stw%X1 %L0, %L1\n" \
EX_TABLE(1b, %l2) \


@ -264,8 +264,9 @@ static int eeh_addr_cache_show(struct seq_file *s, void *v)
{
struct pci_io_addr_range *piar;
struct rb_node *n;
unsigned long flags;
spin_lock(&pci_io_addr_cache_root.piar_lock);
spin_lock_irqsave(&pci_io_addr_cache_root.piar_lock, flags);
for (n = rb_first(&pci_io_addr_cache_root.rb_root); n; n = rb_next(n)) {
piar = rb_entry(n, struct pci_io_addr_range, rb_node);
@ -273,7 +274,7 @@ static int eeh_addr_cache_show(struct seq_file *s, void *v)
(piar->flags & IORESOURCE_IO) ? "i/o" : "mem",
&piar->addr_lo, &piar->addr_hi, pci_name(piar->pcidev));
}
spin_unlock(&pci_io_addr_cache_root.piar_lock);
spin_unlock_irqrestore(&pci_io_addr_cache_root.piar_lock, flags);
return 0;
}


@ -284,11 +284,7 @@ _ENTRY(saved_ksp_limit)
rlwimi r11, r10, 22, 20, 29 /* Compute PTE address */
lwz r11, 0(r11) /* Get Linux PTE */
#ifdef CONFIG_SWAP
li r9, _PAGE_PRESENT | _PAGE_ACCESSED
#else
li r9, _PAGE_PRESENT
#endif
andc. r9, r9, r11 /* Check permission */
bne 5f
@ -369,11 +365,7 @@ _ENTRY(saved_ksp_limit)
rlwimi r11, r10, 22, 20, 29 /* Compute PTE address */
lwz r11, 0(r11) /* Get Linux PTE */
#ifdef CONFIG_SWAP
li r9, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
#else
li r9, _PAGE_PRESENT | _PAGE_EXEC
#endif
andc. r9, r9, r11 /* Check permission */
bne 5f


@ -202,9 +202,7 @@ SystemCall:
InstructionTLBMiss:
mtspr SPRN_SPRG_SCRATCH0, r10
#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
mtspr SPRN_SPRG_SCRATCH1, r11
#endif
/* If we are faulting a kernel address, we have to use the
* kernel page tables.
@ -224,25 +222,13 @@ InstructionTLBMiss:
3:
mtcr r11
#endif
#if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT)
lwz r11, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */
mtspr SPRN_MD_TWC, r11
#else
lwz r10, (swapper_pg_dir-PAGE_OFFSET)@l(r10) /* Get level 1 entry */
mtspr SPRN_MI_TWC, r10 /* Set segment attributes */
mtspr SPRN_MD_TWC, r10
#endif
mfspr r10, SPRN_MD_TWC
lwz r10, 0(r10) /* Get the pte */
#if defined(CONFIG_HUGETLBFS) || !defined(CONFIG_PIN_TLB_TEXT)
rlwimi r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED
rlwimi r11, r10, 32 - 9, _PMD_PAGE_512K
mtspr SPRN_MI_TWC, r11
#endif
#ifdef CONFIG_SWAP
rlwinm r11, r10, 32-5, _PAGE_PRESENT
and r11, r11, r10
rlwimi r10, r11, 0, _PAGE_PRESENT
#endif
/* The Linux PTE won't go exactly into the MMU TLB.
* Software indicator bits 20 and 23 must be clear.
* Software indicator bits 22, 24, 25, 26, and 27 must be
@ -256,9 +242,7 @@ InstructionTLBMiss:
/* Restore registers */
0: mfspr r10, SPRN_SPRG_SCRATCH0
#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP) || defined(CONFIG_HUGETLBFS)
mfspr r11, SPRN_SPRG_SCRATCH1
#endif
rfi
patch_site 0b, patch__itlbmiss_exit_1
@ -268,9 +252,7 @@ InstructionTLBMiss:
addi r10, r10, 1
stw r10, (itlb_miss_counter - PAGE_OFFSET)@l(0)
mfspr r10, SPRN_SPRG_SCRATCH0
#if defined(ITLB_MISS_KERNEL) || defined(CONFIG_SWAP)
mfspr r11, SPRN_SPRG_SCRATCH1
#endif
rfi
#endif
@ -297,30 +279,16 @@ DataStoreTLBMiss:
mfspr r10, SPRN_MD_TWC
lwz r10, 0(r10) /* Get the pte */
/* Insert the Guarded flag into the TWC from the Linux PTE.
/* Insert Guarded and Accessed flags into the TWC from the Linux PTE.
* It is bit 27 of both the Linux PTE and the TWC (at least
* I got that right :-). It will be better when we can put
* this into the Linux pgd/pmd and load it in the operation
* above.
*/
rlwimi r11, r10, 0, _PAGE_GUARDED
rlwimi r11, r10, 0, _PAGE_GUARDED | _PAGE_ACCESSED
rlwimi r11, r10, 32 - 9, _PMD_PAGE_512K
mtspr SPRN_MD_TWC, r11
/* Both _PAGE_ACCESSED and _PAGE_PRESENT has to be set.
* We also need to know if the insn is a load/store, so:
* Clear _PAGE_PRESENT and load that which will
* trap into DTLB Error with store bit set accordinly.
*/
/* PRESENT=0x1, ACCESSED=0x20
* r11 = ((r10 & PRESENT) & ((r10 & ACCESSED) >> 5));
* r10 = (r10 & ~PRESENT) | r11;
*/
#ifdef CONFIG_SWAP
rlwinm r11, r10, 32-5, _PAGE_PRESENT
and r11, r11, r10
rlwimi r10, r11, 0, _PAGE_PRESENT
#endif
/* The Linux PTE won't go exactly into the MMU TLB.
* Software indicator bits 24, 25, 26, and 27 must be
* set. All other Linux PTE bits control the behavior
@ -711,7 +679,7 @@ initial_mmu:
li r9, 4 /* up to 4 pages of 8M */
mtctr r9
lis r9, KERNELBASE@h /* Create vaddr for TLB */
li r10, MI_PS8MEG | MI_SVALID /* Set 8M byte page */
li r10, MI_PS8MEG | _PMD_ACCESSED | MI_SVALID
li r11, MI_BOOTINIT /* Create RPN for address 0 */
1:
mtspr SPRN_MI_CTR, r8 /* Set instruction MMU control */
@ -775,7 +743,7 @@ _GLOBAL(mmu_pin_tlb)
#ifdef CONFIG_PIN_TLB_TEXT
LOAD_REG_IMMEDIATE(r5, 28 << 8)
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
LOAD_REG_IMMEDIATE(r8, 0xf0 | _PAGE_RO | _PAGE_SPS | _PAGE_SH | _PAGE_PRESENT)
LOAD_REG_ADDR(r9, _sinittext)
li r0, 4
@ -797,7 +765,7 @@ _GLOBAL(mmu_pin_tlb)
LOAD_REG_IMMEDIATE(r5, 28 << 8 | MD_TWAM)
#ifdef CONFIG_PIN_TLB_DATA
LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG)
LOAD_REG_IMMEDIATE(r7, MI_SVALID | MI_PS8MEG | _PMD_ACCESSED)
#ifdef CONFIG_PIN_TLB_IMMR
li r0, 3
#else
@ -834,7 +802,7 @@ _GLOBAL(mmu_pin_tlb)
#endif
#ifdef CONFIG_PIN_TLB_IMMR
LOAD_REG_IMMEDIATE(r0, VIRT_IMMR_BASE | MD_EVALID)
LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED)
LOAD_REG_IMMEDIATE(r7, MD_SVALID | MD_PS512K | MD_GUARDED | _PMD_ACCESSED)
mfspr r8, SPRN_IMMR
rlwinm r8, r8, 0, 0xfff80000
ori r8, r8, 0xf0 | _PAGE_DIRTY | _PAGE_SPS | _PAGE_SH | \


@ -457,11 +457,7 @@ InstructionTLBMiss:
cmplw 0,r1,r3
#endif
mfspr r2, SPRN_SPRG_PGDIR
#ifdef CONFIG_SWAP
li r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
#else
li r1,_PAGE_PRESENT | _PAGE_EXEC
#endif
#if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
@ -523,11 +519,7 @@ DataLoadTLBMiss:
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
mfspr r2, SPRN_SPRG_PGDIR
#ifdef CONFIG_SWAP
li r1, _PAGE_PRESENT | _PAGE_ACCESSED
#else
li r1, _PAGE_PRESENT
#endif
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */
@ -603,11 +595,7 @@ DataStoreTLBMiss:
lis r1, TASK_SIZE@h /* check if kernel address */
cmplw 0,r1,r3
mfspr r2, SPRN_SPRG_PGDIR
#ifdef CONFIG_SWAP
li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
#else
li r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT
#endif
bgt- 112f
lis r2, (swapper_pg_dir - PAGE_OFFSET)@ha /* if kernel address, use */
addi r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l /* kernel page table */


@ -1393,13 +1393,14 @@ static void add_cpu_to_masks(int cpu)
/* Activate a secondary processor. */
void start_secondary(void *unused)
{
unsigned int cpu = smp_processor_id();
unsigned int cpu = raw_smp_processor_id();
mmgrab(&init_mm);
current->active_mm = &init_mm;
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
rcu_cpu_starting(cpu);
preempt_disable();
cpu_callin_map[cpu] = 1;


@ -476,7 +476,7 @@ do { \
do { \
long __kr_err; \
\
__put_user_nocheck(*((type *)(dst)), (type *)(src), __kr_err); \
__put_user_nocheck(*((type *)(src)), (type *)(dst), __kr_err); \
if (unlikely(__kr_err)) \
goto err_label; \
} while (0)


@ -1,4 +1,4 @@
/* SPDX-License-Identifier: GPL-2.0 */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2013 Linaro Limited
* Author: AKASHI Takahiro <takahiro.akashi@linaro.org>


@ -35,12 +35,17 @@ ENTRY(_start)
.word 0
#endif
.balign 8
#ifdef CONFIG_RISCV_M_MODE
/* Image load offset (0MB) from start of RAM for M-mode */
.dword 0
#else
#if __riscv_xlen == 64
/* Image load offset(2MB) from start of RAM */
.dword 0x200000
#else
/* Image load offset(4MB) from start of RAM */
.dword 0x400000
#endif
#endif
/* Effective size of kernel image */
.dword _end - _start

arch/riscv/kernel/vdso/.gitignore (vendored)

@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
vdso.lds
*.tmp
vdso-syms.S


@ -43,19 +43,14 @@ $(obj)/vdso.o: $(obj)/vdso.so
SYSCFLAGS_vdso.so.dbg = $(c_flags)
$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
-Wl,--build-id -Wl,--hash-style=both
# We also create a special relocatable object that should mirror the symbol
# table and layout of the linked DSO. With ld --just-symbols we can then
# refer to these symbols in the kernel code rather than hand-coded addresses.
SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
-Wl,--build-id=sha1 -Wl,--hash-style=both
$(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
$(call if_changed,vdsold)
LDFLAGS_vdso-syms.o := -r --just-symbols
$(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
$(call if_changed,ld)
$(obj)/vdso-syms.S: $(obj)/vdso.so FORCE
$(call if_changed,so2s)
# strip rule for the .so file
$(obj)/%.so: OBJCOPYFLAGS := -S
@ -73,6 +68,11 @@ quiet_cmd_vdsold = VDSOLD $@
$(patsubst %, -G __vdso_%, $(vdso-syms)) $@.tmp $@ && \
rm $@.tmp
# Extracts symbol offsets from the VDSO, converting them into an assembly file
# that contains the same symbols at the same offsets.
quiet_cmd_so2s = SO2S $@
cmd_so2s = $(NM) -D $< | $(srctree)/$(src)/so2s.sh > $@
# install commands for the unstripped file
quiet_cmd_vdso_install = INSTALL $@
cmd_vdso_install = cp $(obj)/$@.dbg $(MODLIB)/vdso/$@

arch/riscv/kernel/vdso/so2s.sh Executable file

@ -0,0 +1,6 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0+
# Copyright 2020 Palmer Dabbelt <palmerdabbelt@google.com>
sed 's!\([0-9a-f]*\) T \([a-z0-9_]*\)\(@@LINUX_4.15\)*!.global \2\n.set \2,0x\1!' \
| grep '^\.'
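
A minimal usage sketch of the new helper, assuming GNU sed and that the command is run from arch/riscv/kernel/vdso/; the address and symbol line below are hypothetical stand-ins for real "nm -D vdso.so" output:

#!/bin/sh
# Hypothetical smoke test for so2s.sh: pipe one nm(1)-style symbol line
# through the script and print the generated assembly directives.
printf '0000000000000800 T __vdso_rt_sigreturn@@LINUX_4.15\n' | sh ./so2s.sh
# Expected output:
#   .global __vdso_rt_sigreturn
#   .set __vdso_rt_sigreturn,0x0000000000000800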


@ -86,6 +86,7 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
pmd_t *pmd, *pmd_k;
pte_t *pte_k;
int index;
unsigned long pfn;
/* User mode accesses just cause a SIGSEGV */
if (user_mode(regs))
@ -100,7 +101,8 @@ static inline void vmalloc_fault(struct pt_regs *regs, int code, unsigned long a
* of a task switch.
*/
index = pgd_index(addr);
pgd = (pgd_t *)pfn_to_virt(csr_read(CSR_SATP)) + index;
pfn = csr_read(CSR_SATP) & SATP_PPN;
pgd = (pgd_t *)pfn_to_virt(pfn) + index;
pgd_k = init_mm.pgd + index;
if (!pgd_present(*pgd_k)) {


@ -154,9 +154,8 @@ disable:
void __init setup_bootmem(void)
{
phys_addr_t mem_size = 0;
phys_addr_t total_mem = 0;
phys_addr_t mem_start, start, end = 0;
phys_addr_t mem_start = 0;
phys_addr_t start, end = 0;
phys_addr_t vmlinux_end = __pa_symbol(&_end);
phys_addr_t vmlinux_start = __pa_symbol(&_start);
u64 i;
@ -164,21 +163,18 @@ void __init setup_bootmem(void)
/* Find the memory region containing the kernel */
for_each_mem_range(i, &start, &end) {
phys_addr_t size = end - start;
if (!total_mem)
if (!mem_start)
mem_start = start;
if (start <= vmlinux_start && vmlinux_end <= end)
BUG_ON(size == 0);
total_mem = total_mem + size;
}
/*
* Remove memblock from the end of usable area to the
* end of region
* The maximal physical memory size is -PAGE_OFFSET.
* Make sure that any memory beyond mem_start + (-PAGE_OFFSET) is removed
* as it is unusable by kernel.
*/
mem_size = min(total_mem, (phys_addr_t)-PAGE_OFFSET);
if (mem_start + mem_size < end)
memblock_remove(mem_start + mem_size,
end - mem_start - mem_size);
memblock_enforce_memory_limit(mem_start - PAGE_OFFSET);
/* Reserve from the start of the kernel to the end of the kernel */
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
@ -297,6 +293,7 @@ pmd_t fixmap_pmd[PTRS_PER_PMD] __page_aligned_bss;
#define NUM_EARLY_PMDS (1UL + MAX_EARLY_MAPPING_SIZE / PGDIR_SIZE)
#endif
pmd_t early_pmd[PTRS_PER_PMD * NUM_EARLY_PMDS] __initdata __aligned(PAGE_SIZE);
pmd_t early_dtb_pmd[PTRS_PER_PMD] __initdata __aligned(PAGE_SIZE);
static pmd_t *__init get_pmd_virt_early(phys_addr_t pa)
{
@ -494,6 +491,18 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
load_pa + (va - PAGE_OFFSET),
map_size, PAGE_KERNEL_EXEC);
#ifndef __PAGETABLE_PMD_FOLDED
/* Setup early PMD for DTB */
create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA,
(uintptr_t)early_dtb_pmd, PGDIR_SIZE, PAGE_TABLE);
/* Create two consecutive PMD mappings for FDT early scan */
pa = dtb_pa & ~(PMD_SIZE - 1);
create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA,
pa, PMD_SIZE, PAGE_KERNEL);
create_pmd_mapping(early_dtb_pmd, DTB_EARLY_BASE_VA + PMD_SIZE,
pa + PMD_SIZE, PMD_SIZE, PAGE_KERNEL);
dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PMD_SIZE - 1));
#else
/* Create two consecutive PGD mappings for FDT early scan */
pa = dtb_pa & ~(PGDIR_SIZE - 1);
create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA,
@ -501,6 +510,7 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
create_pgd_mapping(early_pg_dir, DTB_EARLY_BASE_VA + PGDIR_SIZE,
pa + PGDIR_SIZE, PGDIR_SIZE, PAGE_KERNEL);
dtb_early_va = (void *)DTB_EARLY_BASE_VA + (dtb_pa & (PGDIR_SIZE - 1));
#endif
dtb_early_pa = dtb_pa;
/*


@ -93,9 +93,10 @@ CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA_DEBUG=y
CONFIG_CMA_DEBUGFS=y
CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
@ -378,7 +379,6 @@ CONFIG_NETLINK_DIAG=m
CONFIG_CGROUP_NET_PRIO=y
CONFIG_BPF_JIT=y
CONFIG_NET_PKTGEN=m
# CONFIG_NET_DROP_MONITOR is not set
CONFIG_PCI=y
# CONFIG_PCIEASPM is not set
CONFIG_PCI_DEBUG=y
@ -386,7 +386,7 @@ CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_S390=y
CONFIG_DEVTMPFS=y
CONFIG_CONNECTOR=y
CONFIG_ZRAM=m
CONFIG_ZRAM=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
@ -689,6 +689,7 @@ CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_CHACHA20POLY1305=m
@ -709,7 +710,6 @@ CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA3=m
CONFIG_CRYPTO_SM3=m
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_AES_TI=m
@ -753,6 +753,7 @@ CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_CORDIC=m
CONFIG_CRC32_SELFTEST=y
CONFIG_CRC4=m
@ -829,6 +830,7 @@ CONFIG_NETDEV_NOTIFIER_ERROR_INJECT=m
CONFIG_FAULT_INJECTION=y
CONFIG_FAILSLAB=y
CONFIG_FAIL_PAGE_ALLOC=y
CONFIG_FAULT_INJECTION_USERCOPY=y
CONFIG_FAIL_MAKE_REQUEST=y
CONFIG_FAIL_IO_TIMEOUT=y
CONFIG_FAIL_FUTEX=y


@ -87,9 +87,10 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
CONFIG_ZSMALLOC=m
CONFIG_ZSMALLOC=y
CONFIG_ZSMALLOC_STAT=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
@ -371,7 +372,6 @@ CONFIG_NETLINK_DIAG=m
CONFIG_CGROUP_NET_PRIO=y
CONFIG_BPF_JIT=y
CONFIG_NET_PKTGEN=m
# CONFIG_NET_DROP_MONITOR is not set
CONFIG_PCI=y
# CONFIG_PCIEASPM is not set
CONFIG_HOTPLUG_PCI=y
@ -379,7 +379,7 @@ CONFIG_HOTPLUG_PCI_S390=y
CONFIG_UEVENT_HELPER=y
CONFIG_DEVTMPFS=y
CONFIG_CONNECTOR=y
CONFIG_ZRAM=m
CONFIG_ZRAM=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
@ -680,6 +680,7 @@ CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECDH=m
CONFIG_CRYPTO_ECRDSA=m
CONFIG_CRYPTO_SM2=m
CONFIG_CRYPTO_CURVE25519=m
CONFIG_CRYPTO_GCM=y
CONFIG_CRYPTO_CHACHA20POLY1305=m
@ -701,7 +702,6 @@ CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA3=m
CONFIG_CRYPTO_SM3=m
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_AES_TI=m
@ -745,6 +745,7 @@ CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRYPTO_DEV_VIRTIO=m
CONFIG_CORDIC=m
CONFIG_PRIME_NUMBERS=m
CONFIG_CRC4=m


@ -17,11 +17,11 @@ CONFIG_HZ_100=y
# CONFIG_CHSC_SCH is not set
# CONFIG_SCM_BUS is not set
CONFIG_CRASH_DUMP=y
# CONFIG_SECCOMP is not set
# CONFIG_PFAULT is not set
# CONFIG_S390_HYPFS_FS is not set
# CONFIG_VIRTUALIZATION is not set
# CONFIG_S390_GUEST is not set
# CONFIG_SECCOMP is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_IBM_PARTITION=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set


@ -692,16 +692,6 @@ static inline int pud_large(pud_t pud)
return !!(pud_val(pud) & _REGION3_ENTRY_LARGE);
}
static inline unsigned long pud_pfn(pud_t pud)
{
unsigned long origin_mask;
origin_mask = _REGION_ENTRY_ORIGIN;
if (pud_large(pud))
origin_mask = _REGION3_ENTRY_ORIGIN_LARGE;
return (pud_val(pud) & origin_mask) >> PAGE_SHIFT;
}
#define pmd_leaf pmd_large
static inline int pmd_large(pmd_t pmd)
{
@ -747,16 +737,6 @@ static inline int pmd_none(pmd_t pmd)
return pmd_val(pmd) == _SEGMENT_ENTRY_EMPTY;
}
static inline unsigned long pmd_pfn(pmd_t pmd)
{
unsigned long origin_mask;
origin_mask = _SEGMENT_ENTRY_ORIGIN;
if (pmd_large(pmd))
origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
return (pmd_val(pmd) & origin_mask) >> PAGE_SHIFT;
}
#define pmd_write pmd_write
static inline int pmd_write(pmd_t pmd)
{
@ -1238,11 +1218,39 @@ static inline pte_t mk_pte(struct page *page, pgprot_t pgprot)
#define pud_index(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
#define pmd_index(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
#define pmd_deref(pmd) (pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN)
#define pud_deref(pud) (pud_val(pud) & _REGION_ENTRY_ORIGIN)
#define p4d_deref(pud) (p4d_val(pud) & _REGION_ENTRY_ORIGIN)
#define pgd_deref(pgd) (pgd_val(pgd) & _REGION_ENTRY_ORIGIN)
static inline unsigned long pmd_deref(pmd_t pmd)
{
unsigned long origin_mask;
origin_mask = _SEGMENT_ENTRY_ORIGIN;
if (pmd_large(pmd))
origin_mask = _SEGMENT_ENTRY_ORIGIN_LARGE;
return pmd_val(pmd) & origin_mask;
}
static inline unsigned long pmd_pfn(pmd_t pmd)
{
return pmd_deref(pmd) >> PAGE_SHIFT;
}
static inline unsigned long pud_deref(pud_t pud)
{
unsigned long origin_mask;
origin_mask = _REGION_ENTRY_ORIGIN;
if (pud_large(pud))
origin_mask = _REGION3_ENTRY_ORIGIN_LARGE;
return pud_val(pud) & origin_mask;
}
static inline unsigned long pud_pfn(pud_t pud)
{
return pud_deref(pud) >> PAGE_SHIFT;
}
/*
* The pgd_offset function *always* adds the index for the top-level
* region/segment table. This is done to get a sequence like the



@ -61,14 +61,6 @@ int main(void)
BLANK();
OFFSET(__VDSO_GETCPU_VAL, vdso_per_cpu_data, getcpu_val);
BLANK();
/* constants used by the vdso */
DEFINE(__CLOCK_REALTIME, CLOCK_REALTIME);
DEFINE(__CLOCK_MONOTONIC, CLOCK_MONOTONIC);
DEFINE(__CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE);
DEFINE(__CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE);
DEFINE(__CLOCK_THREAD_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID);
DEFINE(__CLOCK_COARSE_RES, LOW_RES_NSEC);
BLANK();
/* idle data offsets */
OFFSET(__CLOCK_IDLE_ENTER, s390_idle_data, clock_idle_enter);
OFFSET(__CLOCK_IDLE_EXIT, s390_idle_data, clock_idle_exit);


@ -855,13 +855,14 @@ void __init smp_detect_cpus(void)
static void smp_init_secondary(void)
{
int cpu = smp_processor_id();
int cpu = raw_smp_processor_id();
S390_lowcore.last_update_clock = get_tod_clock();
restore_access_regs(S390_lowcore.access_regs_save_area);
set_cpu_flag(CIF_ASCE_PRIMARY);
set_cpu_flag(CIF_ASCE_SECONDARY);
cpu_init();
rcu_cpu_starting(cpu);
preempt_disable();
init_cpu_timer();
vtime_init();


@ -101,6 +101,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
if (ret)
break;
/* the PCI function will be scanned once function 0 appears */
if (!zdev->zbus->bus)
break;
pdev = pci_scan_single_device(zdev->zbus->bus, zdev->devfn);
if (!pdev)
break;


@ -164,6 +164,7 @@ void initialize_identity_maps(void *rmode)
add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE);
/* Load the new page-table. */
sev_verify_cbit(top_level_pgt);
write_cr3(top_level_pgt);
}


@ -68,6 +68,9 @@ SYM_FUNC_START(get_sev_encryption_bit)
SYM_FUNC_END(get_sev_encryption_bit)
.code64
#include "../../kernel/sev_verify_cbit.S"
SYM_FUNC_START(set_sev_encryption_mask)
#ifdef CONFIG_AMD_MEM_ENCRYPT
push %rbp
@ -81,6 +84,19 @@ SYM_FUNC_START(set_sev_encryption_mask)
bts %rax, sme_me_mask(%rip) /* Create the encryption mask */
/*
* Read MSR_AMD64_SEV again and store it to sev_status. Can't do this in
* get_sev_encryption_bit() because this function is 32-bit code and
* shared between 64-bit and 32-bit boot path.
*/
movl $MSR_AMD64_SEV, %ecx /* Read the SEV MSR */
rdmsr
/* Store MSR value in sev_status */
shlq $32, %rdx
orq %rdx, %rax
movq %rax, sev_status(%rip)
.Lno_sev_mask:
movq %rbp, %rsp /* Restore original stack pointer */
@ -96,5 +112,7 @@ SYM_FUNC_END(set_sev_encryption_mask)
#ifdef CONFIG_AMD_MEM_ENCRYPT
.balign 8
SYM_DATA(sme_me_mask, .quad 0)
SYM_DATA(sme_me_mask, .quad 0)
SYM_DATA(sev_status, .quad 0)
SYM_DATA(sev_check_data, .quad 0)
#endif


@ -159,4 +159,6 @@ void boot_page_fault(void);
void boot_stage1_vc(void);
void boot_stage2_vc(void);
unsigned long sev_verify_cbit(unsigned long cr3);
#endif /* BOOT_COMPRESSED_MISC_H */


@ -273,11 +273,15 @@ void __init hv_apic_init(void)
pr_info("Hyper-V: Using enlightened APIC (%s mode)",
x2apic_enabled() ? "x2apic" : "xapic");
/*
* With x2apic, architectural x2apic MSRs are equivalent to the
* respective synthetic MSRs, so there's no need to override
* the apic accessors. The only exception is
* hv_apic_eoi_write, because it benefits from lazy EOI when
* available, but it works for both xapic and x2apic modes.
* When in x2apic mode, don't use the Hyper-V specific APIC
* accessors since the field layout in the ICR register is
* different in x2apic mode. Furthermore, the architectural
* x2apic MSRs function just as well as the Hyper-V
* synthetic APIC MSRs, so there's no benefit in having
* separate Hyper-V accessors for x2apic mode. The only
* exception is hv_apic_eoi_write, because it benefits from
* lazy EOI when available, but the same accessor works for
* both xapic and x2apic because the field layout is the same.
*/
apic_set_eoi_write(hv_apic_eoi_write);
if (!x2apic_enabled()) {


@ -290,6 +290,9 @@ static void __init uv_stringify(int len, char *to, char *from)
{
/* Relies on 'to' being NULL chars so result will be NULL terminated */
strncpy(to, from, len-1);
/* Trim trailing spaces */
(void)strim(to);
}
/* Find UV arch type entry in UVsystab */
@ -366,7 +369,7 @@ static int __init early_get_arch_type(void)
return ret;
}
static int __init uv_set_system_type(char *_oem_id)
static int __init uv_set_system_type(char *_oem_id, char *_oem_table_id)
{
/* Save OEM_ID passed from ACPI MADT */
uv_stringify(sizeof(oem_id), oem_id, _oem_id);
@ -386,13 +389,23 @@ static int __init uv_set_system_type(char *_oem_id)
/* (Not hubless), not a UV */
return 0;
/* Is UV hubless system */
uv_hubless_system = 0x01;
/* UV5 Hubless */
if (strncmp(uv_archtype, "NSGI5", 5) == 0)
uv_hubless_system |= 0x20;
/* UV4 Hubless: CH */
if (strncmp(uv_archtype, "NSGI4", 5) == 0)
uv_hubless_system = 0x11;
else if (strncmp(uv_archtype, "NSGI4", 5) == 0)
uv_hubless_system |= 0x10;
/* UV3 Hubless: UV300/MC990X w/o hub */
else
uv_hubless_system = 0x9;
uv_hubless_system |= 0x8;
/* Copy APIC type */
uv_stringify(sizeof(oem_table_id), oem_table_id, _oem_table_id);
pr_info("UV: OEM IDs %s/%s, SystemType %d, HUBLESS ID %x\n",
oem_id, oem_table_id, uv_system_type, uv_hubless_system);
@ -456,7 +469,7 @@ static int __init uv_acpi_madt_oem_check(char *_oem_id, char *_oem_table_id)
uv_cpu_info->p_uv_hub_info = &uv_hub_info_node0;
/* If not UV, return. */
if (likely(uv_set_system_type(_oem_id) == 0))
if (uv_set_system_type(_oem_id, _oem_table_id) == 0)
return 0;
/* Save and Decode OEM Table ID */


@ -1254,6 +1254,14 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
return 0;
}
static bool is_spec_ib_user_controlled(void)
{
return spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP;
}
static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
{
switch (ctrl) {
@ -1261,16 +1269,26 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
return 0;
/*
* Indirect branch speculation is always disabled in strict
* mode. It can neither be enabled if it was force-disabled
* by a previous prctl call.
* With strict mode for both IBPB and STIBP, the instruction
* code paths avoid checking this task flag and instead,
* unconditionally run the instruction. However, STIBP and IBPB
* are independent and either can be set to conditionally
* enabled regardless of the mode of the other.
*
* If either is set to conditional, allow the task flag to be
* updated, unless it was force-disabled by a previous prctl
* call. Currently, this is possible on an AMD CPU which has the
* feature X86_FEATURE_AMD_STIBP_ALWAYS_ON. In this case, if the
* kernel is booted with 'spectre_v2_user=seccomp', then
* spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP and
* spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED.
*/
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
if (!is_spec_ib_user_controlled() ||
task_spec_ib_force_disable(task))
return -EPERM;
task_clear_spec_ib_disable(task);
task_update_spec_tif(task);
break;
@ -1283,10 +1301,10 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
return -EPERM;
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
if (!is_spec_ib_user_controlled())
return 0;
task_set_spec_ib_disable(task);
if (ctrl == PR_SPEC_FORCE_DISABLE)
task_set_spec_ib_force_disable(task);
@ -1351,20 +1369,17 @@ static int ib_prctl_get(struct task_struct *task)
if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
return PR_SPEC_ENABLE;
else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
return PR_SPEC_DISABLE;
else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) {
else if (is_spec_ib_user_controlled()) {
if (task_spec_ib_force_disable(task))
return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
if (task_spec_ib_disable(task))
return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
} else
} else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
return PR_SPEC_DISABLE;
else
return PR_SPEC_NOT_AFFECTED;
}


@ -161,6 +161,21 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
/* Setup early boot stage 4-/5-level pagetables. */
addq phys_base(%rip), %rax
/*
* For SEV guests: Verify that the C-bit is correct. A malicious
* hypervisor could lie about the C-bit position to perform a ROP
* attack on the guest by writing to the unencrypted stack and wait for
* the next RET instruction.
* %rsi carries pointer to realmode data and is callee-clobbered. Save
* and restore it.
*/
pushq %rsi
movq %rax, %rdi
call sev_verify_cbit
popq %rsi
/* Switch to new page-table */
movq %rax, %cr3
/* Ensure I am executing from virtual addresses */
@ -279,6 +294,7 @@ SYM_INNER_LABEL(secondary_startup_64_no_verify, SYM_L_GLOBAL)
SYM_CODE_END(secondary_startup_64)
#include "verify_cpu.S"
#include "sev_verify_cbit.S"
#ifdef CONFIG_HOTPLUG_CPU
/*


@ -178,6 +178,32 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
goto fail;
regs->dx = val >> 32;
/*
* This is a VC handler and the #VC is only raised when SEV-ES is
* active, which means SEV must be active too. Do sanity checks on the
* CPUID results to make sure the hypervisor does not trick the kernel
* into the no-sev path. This could map sensitive data unencrypted and
* make it accessible to the hypervisor.
*
* In particular, check for:
* - Hypervisor CPUID bit
* - Availability of CPUID leaf 0x8000001f
* - SEV CPUID bit.
*
* The hypervisor might still report the wrong C-bit position, but this
* can't be checked here.
*/
if ((fn == 1 && !(regs->cx & BIT(31))))
/* Hypervisor bit */
goto fail;
else if (fn == 0x80000000 && (regs->ax < 0x8000001f))
/* SEV leaf check */
goto fail;
else if ((fn == 0x8000001f && !(regs->ax & BIT(1))))
/* SEV bit */
goto fail;
/* Skip over the CPUID two-byte opcode */
regs->ip += 2;


@ -374,8 +374,8 @@ fault:
return ES_EXCEPTION;
}
static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
unsigned long vaddr, phys_addr_t *paddr)
static enum es_result vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
unsigned long vaddr, phys_addr_t *paddr)
{
unsigned long va = (unsigned long)vaddr;
unsigned int level;
@ -394,15 +394,19 @@ static bool vc_slow_virt_to_phys(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
if (user_mode(ctxt->regs))
ctxt->fi.error_code |= X86_PF_USER;
return false;
return ES_EXCEPTION;
}
if (WARN_ON_ONCE(pte_val(*pte) & _PAGE_ENC))
/* Emulated MMIO to/from encrypted memory not supported */
return ES_UNSUPPORTED;
pa = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT;
pa |= va & ~page_level_mask(level);
*paddr = pa;
return true;
return ES_OK;
}
/* Include code shared with pre-decompression boot stage */
@ -731,6 +735,7 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
{
u64 exit_code, exit_info_1, exit_info_2;
unsigned long ghcb_pa = __pa(ghcb);
enum es_result res;
phys_addr_t paddr;
void __user *ref;
@ -740,11 +745,12 @@ static enum es_result vc_do_mmio(struct ghcb *ghcb, struct es_em_ctxt *ctxt,
exit_code = read ? SVM_VMGEXIT_MMIO_READ : SVM_VMGEXIT_MMIO_WRITE;
if (!vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr)) {
if (!read)
res = vc_slow_virt_to_phys(ghcb, ctxt, (unsigned long)ref, &paddr);
if (res != ES_OK) {
if (res == ES_EXCEPTION && !read)
ctxt->fi.error_code |= X86_PF_WRITE;
return ES_EXCEPTION;
return res;
}
exit_info_1 = paddr;


@ -0,0 +1,89 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* sev_verify_cbit.S - Code for verification of the C-bit position reported
* by the Hypervisor when running with SEV enabled.
*
* Copyright (c) 2020 Joerg Roedel (jroedel@suse.de)
*
* sev_verify_cbit() is called before switching to a new long-mode page-table
* at boot.
*
* Verify that the C-bit position is correct by writing a random value to
* an encrypted memory location while on the current page-table. Then it
* switches to the new page-table to verify the memory content is still the
* same. After that it switches back to the current page-table and when the
* check succeeded it returns. If the check failed the code invalidates the
* stack pointer and goes into a hlt loop. The stack-pointer is invalidated to
* make sure no interrupt or exception can get the CPU out of the hlt loop.
*
* New page-table pointer is expected in %rdi (first parameter)
*
*/
SYM_FUNC_START(sev_verify_cbit)
#ifdef CONFIG_AMD_MEM_ENCRYPT
/* First check if a C-bit was detected */
movq sme_me_mask(%rip), %rsi
testq %rsi, %rsi
jz 3f
/* sme_me_mask != 0 could mean SME or SEV - Check also for SEV */
movq sev_status(%rip), %rsi
testq %rsi, %rsi
jz 3f
/* Save CR4 in %rsi */
movq %cr4, %rsi
/* Disable Global Pages */
movq %rsi, %rdx
andq $(~X86_CR4_PGE), %rdx
movq %rdx, %cr4
/*
* Verified that running under SEV - now get a random value using
* RDRAND. This instruction is mandatory when running as an SEV guest.
*
* Don't bail out of the loop if RDRAND returns errors. It is better to
* prevent forward progress than to work with a non-random value here.
*/
1: rdrand %rdx
jnc 1b
/* Store value to memory and keep it in %rdx */
movq %rdx, sev_check_data(%rip)
/* Backup current %cr3 value to restore it later */
movq %cr3, %rcx
/* Switch to new %cr3 - This might unmap the stack */
movq %rdi, %cr3
/*
* Compare value in %rdx with memory location. If C-bit is incorrect
* this would read the encrypted data and make the check fail.
*/
cmpq %rdx, sev_check_data(%rip)
/* Restore old %cr3 */
movq %rcx, %cr3
/* Restore previous CR4 */
movq %rsi, %cr4
/* Check CMPQ result */
je 3f
/*
* The check failed, prevent any forward progress to prevent ROP
* attacks, invalidate the stack and go into a hlt loop.
*/
xorq %rsp, %rsp
subq $0x1000, %rsp
2: hlt
jmp 2b
3:
#endif
/* Return page-table pointer */
movq %rdi, %rax
ret
SYM_FUNC_END(sev_verify_cbit)


@ -16,8 +16,6 @@
* to a jmp to memcpy_erms which does the REP; MOVSB mem copy.
*/
.weak memcpy
/*
* memcpy - Copy a memory block.
*
@ -30,7 +28,7 @@
* rax original destination
*/
SYM_FUNC_START_ALIAS(__memcpy)
SYM_FUNC_START_LOCAL(memcpy)
SYM_FUNC_START_WEAK(memcpy)
ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
"jmp memcpy_erms", X86_FEATURE_ERMS


@ -24,9 +24,7 @@
* Output:
* rax: dest
*/
.weak memmove
SYM_FUNC_START_ALIAS(memmove)
SYM_FUNC_START_WEAK(memmove)
SYM_FUNC_START(__memmove)
mov %rdi, %rax


@ -6,8 +6,6 @@
#include <asm/alternative-asm.h>
#include <asm/export.h>
.weak memset
/*
* ISO C memset - set a memory block to a byte value. This function uses fast
* string to get better performance than the original function. The code is
@ -19,7 +17,7 @@
*
* rax original destination
*/
SYM_FUNC_START_ALIAS(memset)
SYM_FUNC_START_WEAK(memset)
SYM_FUNC_START(__memset)
/*
* Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended


@ -39,6 +39,7 @@
*/
u64 sme_me_mask __section(".data") = 0;
u64 sev_status __section(".data") = 0;
u64 sev_check_data __section(".data") = 0;
EXPORT_SYMBOL(sme_me_mask);
DEFINE_STATIC_KEY_FALSE(sev_enable_key);
EXPORT_SYMBOL_GPL(sev_enable_key);


@ -89,8 +89,8 @@ static void __init free_highpages(void)
/* set highmem page free */
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
&range_start, &range_end, NULL) {
unsigned long start = PHYS_PFN(range_start);
unsigned long end = PHYS_PFN(range_end);
unsigned long start = PFN_UP(range_start);
unsigned long end = PFN_DOWN(range_end);
/* Ignore complete lowmem entries */
if (end <= max_low)


@ -773,8 +773,7 @@ static void __device_link_del(struct kref *kref)
dev_dbg(link->consumer, "Dropping the link to %s\n",
dev_name(link->supplier));
if (link->flags & DL_FLAG_PM_RUNTIME)
pm_runtime_drop_link(link->consumer);
pm_runtime_drop_link(link);
list_del_rcu(&link->s_node);
list_del_rcu(&link->c_node);
@ -788,8 +787,7 @@ static void __device_link_del(struct kref *kref)
dev_info(link->consumer, "Dropping the link to %s\n",
dev_name(link->supplier));
if (link->flags & DL_FLAG_PM_RUNTIME)
pm_runtime_drop_link(link->consumer);
pm_runtime_drop_link(link);
list_del(&link->s_node);
list_del(&link->c_node);


@ -1117,6 +1117,8 @@ static void __device_release_driver(struct device *dev, struct device *parent)
drv = dev->driver;
if (drv) {
pm_runtime_get_sync(dev);
while (device_links_busy(dev)) {
__device_driver_unlock(dev, parent);
@ -1128,13 +1130,12 @@ static void __device_release_driver(struct device *dev, struct device *parent)
* have released the driver successfully while this one
* was waiting, so check for that.
*/
if (dev->driver != drv)
if (dev->driver != drv) {
pm_runtime_put(dev);
return;
}
}
pm_runtime_get_sync(dev);
pm_runtime_clean_up_links(dev);
driver_sysfs_remove(dev);
if (dev->bus)


@ -1642,42 +1642,6 @@ void pm_runtime_remove(struct device *dev)
pm_runtime_reinit(dev);
}
/**
* pm_runtime_clean_up_links - Prepare links to consumers for driver removal.
* @dev: Device whose driver is going to be removed.
*
* Check links from this device to any consumers and if any of them have active
* runtime PM references to the device, drop the usage counter of the device
* (as many times as needed).
*
* Links with the DL_FLAG_MANAGED flag unset are ignored.
*
* Since the device is guaranteed to be runtime-active at the point this is
* called, nothing else needs to be done here.
*
* Moreover, this is called after device_links_busy() has returned 'false', so
* the status of each link is guaranteed to be DL_STATE_SUPPLIER_UNBIND and
* therefore rpm_active can't be manipulated concurrently.
*/
void pm_runtime_clean_up_links(struct device *dev)
{
struct device_link *link;
int idx;
idx = device_links_read_lock();
list_for_each_entry_rcu(link, &dev->links.consumers, s_node,
device_links_read_lock_held()) {
if (!(link->flags & DL_FLAG_MANAGED))
continue;
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put_noidle(dev);
}
device_links_read_unlock(idx);
}
/**
* pm_runtime_get_suppliers - Resume and reference-count supplier devices.
* @dev: Consumer device.
@ -1729,7 +1693,7 @@ void pm_runtime_new_link(struct device *dev)
spin_unlock_irq(&dev->power.lock);
}
void pm_runtime_drop_link(struct device *dev)
static void pm_runtime_drop_link_count(struct device *dev)
{
spin_lock_irq(&dev->power.lock);
WARN_ON(dev->power.links_count == 0);
@ -1737,6 +1701,25 @@ void pm_runtime_drop_link(struct device *dev)
spin_unlock_irq(&dev->power.lock);
}
/**
* pm_runtime_drop_link - Prepare for device link removal.
* @link: Device link going away.
*
* Drop the link count of the consumer end of @link and decrement the supplier
* device's runtime PM usage counter as many times as needed to drop all of the
* PM runtime reference to it from the consumer.
*/
void pm_runtime_drop_link(struct device_link *link)
{
if (!(link->flags & DL_FLAG_PM_RUNTIME))
return;
pm_runtime_drop_link_count(link->consumer);
while (refcount_dec_not_one(&link->rpm_active))
pm_runtime_put(link->supplier);
}
static bool pm_runtime_need_not_resume(struct device *dev)
{
return atomic_read(&dev->power.usage_count) <= 1 &&


@ -47,7 +47,7 @@ struct nullb_device {
unsigned int nr_zones_closed;
struct blk_zone *zones;
sector_t zone_size_sects;
spinlock_t zone_dev_lock;
spinlock_t zone_lock;
unsigned long *zone_locks;
unsigned long size; /* device size in MB */


@ -46,11 +46,20 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
if (!dev->zones)
return -ENOMEM;
spin_lock_init(&dev->zone_dev_lock);
dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
if (!dev->zone_locks) {
kvfree(dev->zones);
return -ENOMEM;
/*
* With memory backing, the zone_lock spinlock needs to be temporarily
* released to avoid scheduling in atomic context. To guarantee zone
* information protection, use a bitmap to lock zones with
* wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
* implies that the queue is marked with BLK_MQ_F_BLOCKING.
*/
spin_lock_init(&dev->zone_lock);
if (dev->memory_backed) {
dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
if (!dev->zone_locks) {
kvfree(dev->zones);
return -ENOMEM;
}
}
if (dev->zone_nr_conv >= dev->nr_zones) {
@ -137,12 +146,17 @@ void null_free_zoned_dev(struct nullb_device *dev)
static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
{
wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
if (dev->memory_backed)
wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
spin_lock_irq(&dev->zone_lock);
}
static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
{
clear_and_wake_up_bit(zno, dev->zone_locks);
spin_unlock_irq(&dev->zone_lock);
if (dev->memory_backed)
clear_and_wake_up_bit(zno, dev->zone_locks);
}
int null_report_zones(struct gendisk *disk, sector_t sector,
@ -322,7 +336,6 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
null_lock_zone(dev, zno);
spin_lock(&dev->zone_dev_lock);
switch (zone->cond) {
case BLK_ZONE_COND_FULL:
@ -375,9 +388,17 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
zone->cond = BLK_ZONE_COND_IMP_OPEN;
spin_unlock(&dev->zone_dev_lock);
/*
* Memory backing allocation may sleep: release the zone_lock spinlock
* to avoid scheduling in atomic context. Zone operation atomicity is
* still guaranteed through the zone_locks bitmap.
*/
if (dev->memory_backed)
spin_unlock_irq(&dev->zone_lock);
ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
spin_lock(&dev->zone_dev_lock);
if (dev->memory_backed)
spin_lock_irq(&dev->zone_lock);
if (ret != BLK_STS_OK)
goto unlock;
@ -392,7 +413,6 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
ret = BLK_STS_OK;
unlock:
spin_unlock(&dev->zone_dev_lock);
null_unlock_zone(dev, zno);
return ret;
@ -516,9 +536,7 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
null_lock_zone(dev, i);
zone = &dev->zones[i];
if (zone->cond != BLK_ZONE_COND_EMPTY) {
spin_lock(&dev->zone_dev_lock);
null_reset_zone(dev, zone);
spin_unlock(&dev->zone_dev_lock);
trace_nullb_zone_op(cmd, i, zone->cond);
}
null_unlock_zone(dev, i);
@ -530,7 +548,6 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
zone = &dev->zones[zone_no];
null_lock_zone(dev, zone_no);
spin_lock(&dev->zone_dev_lock);
switch (op) {
case REQ_OP_ZONE_RESET:
@ -550,8 +567,6 @@ static blk_status_t null_zone_mgmt(struct nullb_cmd *cmd, enum req_opf op,
break;
}
spin_unlock(&dev->zone_dev_lock);
if (ret == BLK_STS_OK)
trace_nullb_zone_op(cmd, zone_no, zone->cond);


@ -41,6 +41,11 @@ int tpm_read_log_efi(struct tpm_chip *chip)
log_size = log_tbl->size;
memunmap(log_tbl);
if (!log_size) {
pr_warn("UEFI TPM log area empty\n");
return -EIO;
}
log_tbl = memremap(efi.tpm_log, sizeof(*log_tbl) + log_size,
MEMREMAP_WB);
if (!log_tbl) {


@ -27,6 +27,7 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/kernel.h>
#include <linux/dmi.h>
#include "tpm.h"
#include "tpm_tis_core.h"
@ -49,8 +50,8 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da
return container_of(data, struct tpm_tis_tcg_phy, priv);
}
static bool interrupts = true;
module_param(interrupts, bool, 0444);
static int interrupts = -1;
module_param(interrupts, int, 0444);
MODULE_PARM_DESC(interrupts, "Enable interrupts");
static bool itpm;
@ -63,6 +64,28 @@ module_param(force, bool, 0444);
MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry");
#endif
static int tpm_tis_disable_irq(const struct dmi_system_id *d)
{
if (interrupts == -1) {
pr_notice("tpm_tis: %s detected: disabling interrupts.\n", d->ident);
interrupts = 0;
}
return 0;
}
static const struct dmi_system_id tpm_tis_dmi_table[] = {
{
.callback = tpm_tis_disable_irq,
.ident = "ThinkPad T490s",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T490s"),
},
},
{}
};
#if defined(CONFIG_PNP) && defined(CONFIG_ACPI)
static int has_hid(struct acpi_device *dev, const char *hid)
{
@ -192,6 +215,8 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info)
int irq = -1;
int rc;
dmi_check_system(tpm_tis_dmi_table);
rc = check_acpi_tpm2(dev);
if (rc)
return rc;


@ -7,7 +7,7 @@
*
* This file add support for MD5 and SHA1/SHA224/SHA256/SHA384/SHA512.
*
* You could find the datasheet in Documentation/arm/sunxi/README
* You could find the datasheet in Documentation/arm/sunxi.rst
*/
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>


@ -7,7 +7,7 @@
*
* This file handle the PRNG
*
* You could find a link for the datasheet in Documentation/arm/sunxi/README
* You could find a link for the datasheet in Documentation/arm/sunxi.rst
*/
#include "sun8i-ce.h"
#include <linux/dma-mapping.h>


@ -7,7 +7,7 @@
*
* This file handle the TRNG
*
* You could find a link for the datasheet in Documentation/arm/sunxi/README
* You could find a link for the datasheet in Documentation/arm/sunxi.rst
*/
#include "sun8i-ce.h"
#include <linux/dma-mapping.h>


@ -55,7 +55,8 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
amdgpu_vf_error.o amdgpu_sched.o amdgpu_debugfs.o amdgpu_ids.o \
amdgpu_gmc.o amdgpu_mmhub.o amdgpu_xgmi.o amdgpu_csa.o amdgpu_ras.o amdgpu_vm_cpu.o \
amdgpu_vm_sdma.o amdgpu_discovery.o amdgpu_ras_eeprom.o amdgpu_nbio.o \
amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o amdgpu_rap.o
amdgpu_umc.o smu_v11_0_i2c.o amdgpu_fru_eeprom.o amdgpu_rap.o \
amdgpu_fw_attestation.o
amdgpu-$(CONFIG_PERF_EVENTS) += amdgpu_pmu.o
@ -69,7 +70,8 @@ amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce
amdgpu-y += \
vi.o mxgpu_vi.o nbio_v6_1.o soc15.o emu_soc.o mxgpu_ai.o nbio_v7_0.o vega10_reg_init.o \
vega20_reg_init.o nbio_v7_4.o nbio_v2_3.o nv.o navi10_reg_init.o navi14_reg_init.o \
arct_reg_init.o navi12_reg_init.o mxgpu_nv.o sienna_cichlid_reg_init.o
arct_reg_init.o navi12_reg_init.o mxgpu_nv.o sienna_cichlid_reg_init.o vangogh_reg_init.o \
nbio_v7_2.o dimgrey_cavefish_reg_init.o
# add DF block
amdgpu-y += \
@ -81,7 +83,7 @@ amdgpu-y += \
gmc_v7_0.o \
gmc_v8_0.o \
gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o gfxhub_v1_1.o mmhub_v9_4.o \
gfxhub_v2_0.o mmhub_v2_0.o gmc_v10_0.o gfxhub_v2_1.o
gfxhub_v2_0.o mmhub_v2_0.o gmc_v10_0.o gfxhub_v2_1.o mmhub_v2_3.o
# add UMC block
amdgpu-y += \


@ -623,6 +623,8 @@ struct amdgpu_asic_funcs {
bool (*supports_baco)(struct amdgpu_device *adev);
/* pre asic_init quirks */
void (*pre_asic_init)(struct amdgpu_device *adev);
/* enter/exit umd stable pstate */
int (*update_umd_stable_pstate)(struct amdgpu_device *adev, bool enter);
};
/*
@ -723,6 +725,45 @@ struct amd_powerplay {
const struct amd_pm_funcs *pp_funcs;
};
/* polaris10 kickers */
#define ASICID_IS_P20(did, rid) (((did == 0x67DF) && \
((rid == 0xE3) || \
(rid == 0xE4) || \
(rid == 0xE5) || \
(rid == 0xE7) || \
(rid == 0xEF))) || \
((did == 0x6FDF) && \
((rid == 0xE7) || \
(rid == 0xEF) || \
(rid == 0xFF))))
#define ASICID_IS_P30(did, rid) ((did == 0x67DF) && \
((rid == 0xE1) || \
(rid == 0xF7)))
/* polaris11 kickers */
#define ASICID_IS_P21(did, rid) (((did == 0x67EF) && \
((rid == 0xE0) || \
(rid == 0xE5))) || \
((did == 0x67FF) && \
((rid == 0xCF) || \
(rid == 0xEF) || \
(rid == 0xFF))))
#define ASICID_IS_P31(did, rid) ((did == 0x67EF) && \
((rid == 0xE2)))
/* polaris12 kickers */
#define ASICID_IS_P23(did, rid) (((did == 0x6987) && \
((rid == 0xC0) || \
(rid == 0xC1) || \
(rid == 0xC3) || \
(rid == 0xC7))) || \
((did == 0x6981) && \
((rid == 0x00) || \
(rid == 0x01) || \
(rid == 0x10))))
#define AMDGPU_RESET_MAGIC_NUM 64
#define AMDGPU_MAX_DF_PERFMONS 4
struct amdgpu_device {
@ -1165,6 +1206,8 @@ int emu_soc_asic_init(struct amdgpu_device *adev);
#define amdgpu_asic_get_pcie_replay_count(adev) ((adev)->asic_funcs->get_pcie_replay_count((adev)))
#define amdgpu_asic_supports_baco(adev) (adev)->asic_funcs->supports_baco((adev))
#define amdgpu_asic_pre_asic_init(adev) (adev)->asic_funcs->pre_asic_init((adev))
#define amdgpu_asic_update_umd_stable_pstate(adev, enter) \
((adev)->asic_funcs->update_umd_stable_pstate ? (adev)->asic_funcs->update_umd_stable_pstate((adev), (enter)) : 0)
#define amdgpu_inc_vram_lost(adev) atomic_inc(&((adev)->vram_lost_counter));
@ -1294,19 +1337,6 @@ bool amdgpu_device_load_pci_state(struct pci_dev *pdev);
#include "amdgpu_object.h"
/* used by df_v3_6.c and amdgpu_pmu.c */
#define AMDGPU_PMU_ATTR(_name, _object) \
static ssize_t \
_name##_show(struct device *dev, \
struct device_attribute *attr, \
char *page) \
{ \
BUILD_BUG_ON(sizeof(_object) >= PAGE_SIZE - 1); \
return sprintf(page, _object "\n"); \
} \
\
static struct device_attribute pmu_attr_##_name = __ATTR_RO(_name)
static inline bool amdgpu_is_tmz(struct amdgpu_device *adev)
{
return adev->gmc.tmz_enabled;


@ -36,17 +36,17 @@
#include "acp_gfx_if.h"
#define ACP_TILE_ON_MASK 0x03
#define ACP_TILE_OFF_MASK 0x02
#define ACP_TILE_ON_RETAIN_REG_MASK 0x1f
#define ACP_TILE_OFF_RETAIN_REG_MASK 0x20
#define ACP_TILE_ON_MASK 0x03
#define ACP_TILE_OFF_MASK 0x02
#define ACP_TILE_ON_RETAIN_REG_MASK 0x1f
#define ACP_TILE_OFF_RETAIN_REG_MASK 0x20
#define ACP_TILE_P1_MASK 0x3e
#define ACP_TILE_P2_MASK 0x3d
#define ACP_TILE_DSP0_MASK 0x3b
#define ACP_TILE_DSP1_MASK 0x37
#define ACP_TILE_P1_MASK 0x3e
#define ACP_TILE_P2_MASK 0x3d
#define ACP_TILE_DSP0_MASK 0x3b
#define ACP_TILE_DSP1_MASK 0x37
#define ACP_TILE_DSP2_MASK 0x2f
#define ACP_TILE_DSP2_MASK 0x2f
#define ACP_DMA_REGS_END 0x146c0
#define ACP_I2S_PLAY_REGS_START 0x14840
@ -75,8 +75,8 @@
#define mmACP_CONTROL 0x5131
#define mmACP_STATUS 0x5133
#define mmACP_SOFT_RESET 0x5134
#define ACP_CONTROL__ClkEn_MASK 0x1
#define ACP_SOFT_RESET__SoftResetAud_MASK 0x100
#define ACP_CONTROL__ClkEn_MASK 0x1
#define ACP_SOFT_RESET__SoftResetAud_MASK 0x100
#define ACP_SOFT_RESET__SoftResetAudDone_MASK 0x1000000
#define ACP_CLOCK_EN_TIME_OUT_VALUE 0x000000FF
#define ACP_SOFT_RESET_DONE_TIME_OUT_VALUE 0x000000FF


@ -390,23 +390,17 @@ void amdgpu_amdkfd_get_local_mem_info(struct kgd_dev *kgd,
struct kfd_local_mem_info *mem_info)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
uint64_t address_mask = adev->dev->dma_mask ? ~*adev->dev->dma_mask :
~((1ULL << 32) - 1);
resource_size_t aper_limit = adev->gmc.aper_base + adev->gmc.aper_size;
memset(mem_info, 0, sizeof(*mem_info));
if (!(adev->gmc.aper_base & address_mask || aper_limit & address_mask)) {
mem_info->local_mem_size_public = adev->gmc.visible_vram_size;
mem_info->local_mem_size_private = adev->gmc.real_vram_size -
adev->gmc.visible_vram_size;
} else {
mem_info->local_mem_size_public = 0;
mem_info->local_mem_size_private = adev->gmc.real_vram_size;
}
mem_info->local_mem_size_public = adev->gmc.visible_vram_size;
mem_info->local_mem_size_private = adev->gmc.real_vram_size -
adev->gmc.visible_vram_size;
mem_info->vram_width = adev->gmc.vram_width;
pr_debug("Address base: %pap limit %pap public 0x%llx private 0x%llx\n",
&adev->gmc.aper_base, &aper_limit,
pr_debug("Address base: %pap public 0x%llx private 0x%llx\n",
&adev->gmc.aper_base,
mem_info->local_mem_size_public,
mem_info->local_mem_size_private);
@ -648,6 +642,13 @@ void amdgpu_amdkfd_set_compute_idle(struct kgd_dev *kgd, bool idle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
/* Temp workaround to fix the soft hang observed in certain compute
* applications if GFXOFF is enabled.
*/
if (adev->asic_type == CHIP_SIENNA_CICHLID) {
pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
amdgpu_gfx_off_ctrl(adev, idle);
}
amdgpu_dpm_switch_power_profile(adev,
PP_SMC_POWER_PROFILE_COMPUTE,
!idle);


@ -304,4 +304,5 @@ const struct kfd2kgd_calls arcturus_kfd2kgd = {
kgd_gfx_v9_get_atc_vmid_pasid_mapping_info,
.set_vm_context_page_table_base =
kgd_gfx_v9_set_vm_context_page_table_base,
.get_cu_occupancy = kgd_gfx_v9_get_cu_occupancy
};


@ -799,7 +799,7 @@ static void get_wave_count(struct amdgpu_device *adev, int queue_idx,
*
* Reading registers referenced above involves programming GRBM appropriately
*/
static void kgd_gfx_v9_get_cu_occupancy(struct kgd_dev *kgd, int pasid,
void kgd_gfx_v9_get_cu_occupancy(struct kgd_dev *kgd, int pasid,
int *pasid_wave_cnt, int *max_waves_per_cu)
{
int qidx;

Some files were not shown because too many files changed in this diff.