MediaTek Helio X10 MT6795 - MT6331/6332 Regulators

Merge series from AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>:

In an effort to give some love to the apparently forgotten MT6795 SoC,
I am upstreaming more components that are necessary to support platforms
powered by this SoC beyond a simple boot to serial console.

This series adds support for the regulators found in MT6331 and MT6332
main/companion PMICs.

Support for each driver in each subsystem is added in a separate patch
series so as to avoid spamming maintainers with uninteresting patches.

Tested on a MT6795 Sony Xperia M5 (codename "Holly") smartphone.
Committed by Mark Brown on 2022-09-13 17:43:23 +01:00
Parents: 69a673c9e5 1cc5a52e87
Commit: ca9b8f0486
No key found matching this signature
GPG key ID: 24D68B725D5487D0
1043 changed files: 14863 additions and 6982 deletions


@ -1,2 +1,4 @@
Alan Cox <alan@lxorguk.ukuu.org.uk>
Alan Cox <root@hraefn.swansea.linux.org.uk>
Christoph Hellwig <hch@lst.de>
Marc Gonzalez <marc.w.gonzalez@free.fr>


@ -98,8 +98,7 @@ Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com>
Christian Marangi <ansuelsmth@gmail.com>
Christophe Ricard <christophe.ricard@gmail.com>
Christoph Hellwig <hch@lst.de>
Colin Ian King <colin.king@intel.com> <colin.king@canonical.com>
Colin Ian King <colin.king@intel.com> <colin.i.king@gmail.com>
Colin Ian King <colin.i.king@gmail.com> <colin.king@canonical.com>
Corey Minyard <minyard@acm.org>
Damian Hobson-Garcia <dhobsong@igel.co.jp>
Daniel Borkmann <daniel@iogearbox.net> <danborkmann@googlemail.com>
@ -150,6 +149,8 @@ Greg Kroah-Hartman <gregkh@suse.de>
Greg Kroah-Hartman <greg@kroah.com>
Greg Kurz <groug@kaod.org> <gkurz@linux.vnet.ibm.com>
Gregory CLEMENT <gregory.clement@bootlin.com> <gregory.clement@free-electrons.com>
Guilherme G. Piccoli <kernel@gpiccoli.net> <gpiccoli@linux.vnet.ibm.com>
Guilherme G. Piccoli <kernel@gpiccoli.net> <gpiccoli@canonical.com>
Guo Ren <guoren@kernel.org> <guoren@linux.alibaba.com>
Guo Ren <guoren@kernel.org> <ren_guo@c-sky.com>
Gustavo Padovan <gustavo@las.ic.unicamp.br>
@ -253,6 +254,7 @@ Linus Lüssing <linus.luessing@c0d3.blue> <linus.luessing@web.de>
Li Yang <leoyang.li@nxp.com> <leoli@freescale.com>
Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
Lorenzo Pieralisi <lpieralisi@kernel.org> <lorenzo.pieralisi@arm.com>
Luca Ceresoli <luca.ceresoli@bootlin.com> <luca@lucaceresoli.net>
Lukasz Luba <lukasz.luba@arm.com> <l.luba@partner.samsung.com>
Maciej W. Rozycki <macro@mips.com> <macro@imgtec.com>
Maciej W. Rozycki <macro@orcam.me.uk> <macro@linux-mips.org>


@ -523,6 +523,7 @@ What: /sys/devices/system/cpu/vulnerabilities
/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
/sys/devices/system/cpu/vulnerabilities/itlb_multihit
/sys/devices/system/cpu/vulnerabilities/mmio_stale_data
/sys/devices/system/cpu/vulnerabilities/retbleed
Date: January 2018
Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description: Information about CPU vulnerabilities
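
As an illustration (not part of the ABI description itself), a one-liner
sketch to dump the current status of every vulnerability file listed above::

grep . /sys/devices/system/cpu/vulnerabilities/*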


@ -1,9 +1,9 @@
.. _readme:
Linux kernel release 5.x <http://kernel.org/>
Linux kernel release 6.x <http://kernel.org/>
=============================================
These are the release notes for Linux version 5. Read them carefully,
These are the release notes for Linux version 6. Read them carefully,
as they tell you what this is all about, explain how to install the
kernel, and what to do if something goes wrong.
@ -63,7 +63,7 @@ Installing the kernel source
directory where you have permissions (e.g. your home directory) and
unpack it::
xz -cd linux-5.x.tar.xz | tar xvf -
xz -cd linux-6.x.tar.xz | tar xvf -
Replace "X" with the version number of the latest kernel.
@ -72,12 +72,12 @@ Installing the kernel source
files. They should match the library, and not get messed up by
whatever the kernel-du-jour happens to be.
- You can also upgrade between 5.x releases by patching. Patches are
- You can also upgrade between 6.x releases by patching. Patches are
distributed in the xz format. To install by patching, get all the
newer patch files, enter the top level directory of the kernel source
(linux-5.x) and execute::
(linux-6.x) and execute::
xz -cd ../patch-5.x.xz | patch -p1
xz -cd ../patch-6.x.xz | patch -p1
Replace "x" for all versions bigger than the version "x" of your current
source tree, **in_order**, and you should be ok. You may want to remove
@ -85,13 +85,13 @@ Installing the kernel source
that there are no failed patches (some-file-name# or some-file-name.rej).
If there are, either you or I have made a mistake.
Unlike patches for the 5.x kernels, patches for the 5.x.y kernels
Unlike patches for the 6.x kernels, patches for the 6.x.y kernels
(also known as the -stable kernels) are not incremental but instead apply
directly to the base 5.x kernel. For example, if your base kernel is 5.0
and you want to apply the 5.0.3 patch, you must not first apply the 5.0.1
and 5.0.2 patches. Similarly, if you are running kernel version 5.0.2 and
want to jump to 5.0.3, you must first reverse the 5.0.2 patch (that is,
patch -R) **before** applying the 5.0.3 patch. You can read more on this in
directly to the base 6.x kernel. For example, if your base kernel is 6.0
and you want to apply the 6.0.3 patch, you must not first apply the 6.0.1
and 6.0.2 patches. Similarly, if you are running kernel version 6.0.2 and
want to jump to 6.0.3, you must first reverse the 6.0.2 patch (that is,
patch -R) **before** applying the 6.0.3 patch. You can read more on this in
:ref:`Documentation/process/applying-patches.rst <applying_patches>`.
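
As a concrete illustration of that reverse-then-apply dance, a hedged sketch
assuming the tree currently carries 6.0.2 and the patch files sit in the
parent directory::

cd linux-6.0
xz -cd ../patch-6.0.2.xz | patch -R -p1
xz -cd ../patch-6.0.3.xz | patch -p1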
Alternatively, the script patch-kernel can be used to automate this
@ -114,7 +114,7 @@ Installing the kernel source
Software requirements
---------------------
Compiling and running the 5.x kernels requires up-to-date
Compiling and running the 6.x kernels requires up-to-date
versions of various software packages. Consult
:ref:`Documentation/process/changes.rst <changes>` for the minimum version numbers
required and how to get updates for these packages. Beware that using
@ -132,12 +132,12 @@ Build directory for the kernel
place for the output files (including .config).
Example::
kernel source code: /usr/src/linux-5.x
kernel source code: /usr/src/linux-6.x
build directory: /home/name/build/kernel
To configure and build the kernel, use::
cd /usr/src/linux-5.x
cd /usr/src/linux-6.x
make O=/home/name/build/kernel menuconfig
make O=/home/name/build/kernel
sudo make O=/home/name/build/kernel modules_install install


@ -230,6 +230,20 @@ The possible values in this file are:
* - 'Mitigation: Clear CPU buffers'
- The processor is vulnerable and the CPU buffer clearing mitigation is
enabled.
* - 'Unknown: No mitigations'
- The processor vulnerability status is unknown because it is
out of Servicing period. Mitigation is not attempted.
Definitions:
------------
Servicing period: The process of providing functional and security updates to
Intel processors or platforms, utilizing the Intel Platform Update (IPU)
process or other similar mechanisms.
End of Servicing Updates (ESU): ESU is the date at which Intel will no
longer provide Servicing, such as through IPU or other similar update
processes. ESU dates will typically be aligned to end of quarter.
If the processor is vulnerable then the following information is appended to
the above information:


@ -5331,6 +5331,8 @@
rodata= [KNL]
on Mark read-only kernel memory as read-only (default).
off Leave read-only kernel memory writable for debugging.
full Mark read-only kernel memory and aliases as read-only
[arm64]
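
As a quick sanity check, a minimal sketch for confirming which mode was
requested on the running kernel (this is a boot-time parameter, so a change
only takes effect on the next boot)::

grep -o 'rodata=[a-z]*' /proc/cmdline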
rockchip.usb_uart
Enable the uart passthrough on the designated usb port


@ -50,10 +50,10 @@ For a short example, users can monitor the virtual address space of a given
workload as below. ::
# cd /sys/kernel/mm/damon/admin/
# echo 1 > kdamonds/nr && echo 1 > kdamonds/0/contexts/nr
# echo 1 > kdamonds/nr_kdamonds && echo 1 > kdamonds/0/contexts/nr_contexts
# echo vaddr > kdamonds/0/contexts/0/operations
# echo 1 > kdamonds/0/contexts/0/targets/nr
# echo $(pidof <workload>) > kdamonds/0/contexts/0/targets/0/pid
# echo 1 > kdamonds/0/contexts/0/targets/nr_targets
# echo $(pidof <workload>) > kdamonds/0/contexts/0/targets/0/pid_target
# echo on > kdamonds/0/state
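
When the monitoring session is done, the same ``state`` file can be read to
check whether the kdamond is still running and written to stop it; a short
sketch, continuing from the working directory above::

# cat kdamonds/0/state
# echo off > kdamonds/0/state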
Files Hierarchy
@ -366,12 +366,12 @@ memory rate becomes larger than 60%, or lower than 30%". ::
# echo 1 > kdamonds/0/contexts/0/schemes/nr_schemes
# cd kdamonds/0/contexts/0/schemes/0
# # set the basic access pattern and the action
# echo 4096 > access_patterns/sz/min
# echo 8192 > access_patterns/sz/max
# echo 0 > access_patterns/nr_accesses/min
# echo 5 > access_patterns/nr_accesses/max
# echo 10 > access_patterns/age/min
# echo 20 > access_patterns/age/max
# echo 4096 > access_pattern/sz/min
# echo 8192 > access_pattern/sz/max
# echo 0 > access_pattern/nr_accesses/min
# echo 5 > access_pattern/nr_accesses/max
# echo 10 > access_pattern/age/min
# echo 20 > access_pattern/age/max
# echo pageout > action
# # set quotas
# echo 10 > quotas/ms


@ -271,7 +271,7 @@ poll cycle or the number of packets processed reaches netdev_budget.
netdev_max_backlog
------------------
Maximum number of packets, queued on the INPUT side, when the interface
Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than kernel can process them.
netdev_rss_key


@ -242,44 +242,34 @@ HWCAP2_MTE3
by Documentation/arm64/memory-tagging-extension.rst.
HWCAP2_SME
Functionality implied by ID_AA64PFR1_EL1.SME == 0b0001, as described
by Documentation/arm64/sme.rst.
HWCAP2_SME_I16I64
Functionality implied by ID_AA64SMFR0_EL1.I16I64 == 0b1111.
HWCAP2_SME_F64F64
Functionality implied by ID_AA64SMFR0_EL1.F64F64 == 0b1.
HWCAP2_SME_I8I32
Functionality implied by ID_AA64SMFR0_EL1.I8I32 == 0b1111.
HWCAP2_SME_F16F32
Functionality implied by ID_AA64SMFR0_EL1.F16F32 == 0b1.
HWCAP2_SME_B16F32
Functionality implied by ID_AA64SMFR0_EL1.B16F32 == 0b1.
HWCAP2_SME_F32F32
Functionality implied by ID_AA64SMFR0_EL1.F32F32 == 0b1.
HWCAP2_SME_FA64
Functionality implied by ID_AA64SMFR0_EL1.FA64 == 0b1.
HWCAP2_WFXT
Functionality implied by ID_AA64ISAR2_EL1.WFXT == 0b0010.
HWCAP2_EBF16
Functionality implied by ID_AA64ISAR1_EL1.BF16 == 0b0010.
4. Unused AT_HWCAP bits


@ -52,6 +52,8 @@ stable kernels.
| Allwinner | A64/R18 | UNKNOWN1 | SUN50I_ERRATUM_UNKNOWN1 |
+----------------+-----------------+-----------------+-----------------------------+
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A510 | #2457168 | ARM64_ERRATUM_2457168 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A510 | #2064142 | ARM64_ERRATUM_2064142 |
+----------------+-----------------+-----------------+-----------------------------+
| ARM | Cortex-A510 | #2038923 | ARM64_ERRATUM_2038923 |


@ -58,13 +58,11 @@ Like with atomic_t, the rule of thumb is:
- RMW operations that have a return value are fully ordered.
- RMW operations that are conditional are unordered on FAILURE,
otherwise the above rules apply. In the case of test_and_{}_bit() operations,
if the bit in memory is unchanged by the operation then it is deemed to have
failed.
- RMW operations that are conditional are fully ordered.
Except for a successful test_and_set_bit_lock() which has ACQUIRE semantics and
clear_bit_unlock() which has RELEASE semantics.
Except for a successful test_and_set_bit_lock() which has ACQUIRE semantics,
clear_bit_unlock() which has RELEASE semantics and test_bit_acquire which has
ACQUIRE semantics.
Since a platform only has a single means of achieving atomic operations
the same barriers as for atomic_t are used, see atomic_t.txt.


@ -23,3 +23,4 @@ Block
stat
switching-sched
writeback_cache_control
ublk


@ -0,0 +1,253 @@
.. SPDX-License-Identifier: GPL-2.0
===========================================
Userspace block device driver (ublk driver)
===========================================
Overview
========
ublk is a generic framework for implementing block device logic from userspace.
The motivation behind it is that moving virtual block drivers into userspace,
such as loop, nbd and similar, can be very helpful. It can help to implement
new virtual block devices such as ublk-qcow2 (there have been several attempts
at implementing a qcow2 driver in the kernel).
Userspace block devices are attractive because:
- They can be written in many programming languages.
- They can use libraries that are not available in the kernel.
- They can be debugged with tools familiar to application developers.
- Crashes do not kernel panic the machine.
- Bugs are likely to have a lower security impact than bugs in kernel
code.
- They can be installed and updated independently of the kernel.
- They can be used to easily simulate a block device with user-specified
parameters/settings for test/debug purposes.
The ublk block device (``/dev/ublkb*``) is added by the ublk driver. Any IO
request on the device will be forwarded to the ublk userspace program. For
convenience, in this document, ``ublk server`` refers to a generic ublk
userspace program. ``ublksrv`` [#userspace]_ is one such implementation. It
provides the ``libublksrv`` [#userspace_lib]_ library for conveniently
developing specific user block devices, and generic block device types such as
loop and null are included as well. Richard W.M. Jones wrote the userspace nbd
device ``nbdublk`` [#userspace_nbdublk]_ based on ``libublksrv`` [#userspace_lib]_.
After the IO is handled by userspace, the result is committed back to the
driver, thus completing the request cycle. This way, any specific IO handling
logic is totally done by userspace, such as loop's IO handling, NBD's IO
communication, or qcow2's IO mapping.
``/dev/ublkb*`` is driven by a blk-mq request-based driver. Each request is
assigned a queue-wide unique tag. The ublk server assigns a unique tag to each
IO too, which is 1:1 mapped with the IO of ``/dev/ublkb*``.
Both forwarding the IO request and committing the IO handling result are done
via ``io_uring`` passthrough commands; that is why ublk is also an io_uring
based block driver. It has been observed that using io_uring passthrough
commands can give better IOPS than block IO, which is why ublk is a high
performance implementation of a userspace block device: not only is the IO
request communication done by io_uring, but the preferred IO handling in the
ublk server is io_uring based too.
ublk provides a control interface to set/get ublk block device parameters.
The interface is extendable and kabi compatible: basically any ublk request
queue parameter or ublk generic feature parameter can be set/get via the
interface. Thus, ublk is a generic userspace block device framework.
For example, it is easy to set up a ublk device with specified block
parameters from userspace.
Using ublk
==========
ublk requires a userspace ublk server to handle the real block device logic.
Below is an example of using ``ublksrv`` to provide a ublk-based loop device.
- add a device::
ublk add -t loop -f ublk-loop.img
- format with xfs, then use it::
mkfs.xfs /dev/ublkb0
mount /dev/ublkb0 /mnt
# do anything. all IOs are handled by io_uring
...
umount /mnt
- list the devices with their info::
ublk list
- delete the device::
ublk del -a
ublk del -n $ublk_dev_id
See usage details in README of ``ublksrv`` [#userspace_readme]_.
Design
======
Control plane
-------------
ublk driver provides global misc device node (``/dev/ublk-control``) for
managing and controlling ublk devices with help of several control commands:
- ``UBLK_CMD_ADD_DEV``
Add a ublk char device (``/dev/ublkc*``) which the ublk server talks to
for IO command communication. Basic device info is sent together with this
command. It sets the UAPI structure ``ublksrv_ctrl_dev_info``,
such as ``nr_hw_queues``, ``queue_depth``, and the max IO request buffer size;
this info is negotiated with the driver and sent back to the server.
When this command is completed, the basic device info is immutable.
- ``UBLK_CMD_SET_PARAMS`` / ``UBLK_CMD_GET_PARAMS``
Set or get parameters of the device, which can be either generic feature
related, or request queue limit related, but can't be IO logic specific,
because the driver does not handle any IO logic. This command has to be
sent before sending ``UBLK_CMD_START_DEV``.
- ``UBLK_CMD_START_DEV``
After the server prepares userspace resources (such as creating per-queue
pthread & io_uring for handling ublk IO), this command is sent to the
driver for allocating & exposing ``/dev/ublkb*``. Parameters set via
``UBLK_CMD_SET_PARAMS`` are applied for creating the device.
- ``UBLK_CMD_STOP_DEV``
Halt IO on ``/dev/ublkb*`` and remove the device. When this command returns,
ublk server will release resources (such as destroying per-queue pthread &
io_uring).
- ``UBLK_CMD_DEL_DEV``
Remove ``/dev/ublkc*``. When this command returns, the allocated ublk device
number can be reused.
- ``UBLK_CMD_GET_QUEUE_AFFINITY``
When ``/dev/ublkc*`` is added, the driver creates the block layer tagset, so
that each queue's affinity info is available. The server sends
``UBLK_CMD_GET_QUEUE_AFFINITY`` to retrieve the queue affinity info. It can
then set up the per-queue context efficiently, such as binding the IO pthread
to its affine CPUs and trying to allocate buffers in the IO thread context.
- ``UBLK_CMD_GET_DEV_INFO``
For retrieving device info via ``ublksrv_ctrl_dev_info``. It is the server's
responsibility to save IO target specific info in userspace.
Data plane
----------
The ublk server needs to create a per-queue IO pthread & io_uring for handling
IO commands via io_uring passthrough. The per-queue IO pthread
focuses on IO handling and shouldn't handle any control & management
tasks.
Each IO is assigned a unique tag, which is 1:1 mapped with the IO
request of ``/dev/ublkb*``.
The UAPI structure ``ublksrv_io_desc`` is defined for describing each IO from
the driver. A fixed mmapped area (array) on ``/dev/ublkc*`` is provided for
exporting IO info to the server, such as IO offset, length, OP/flags and
buffer address. Each ``ublksrv_io_desc`` instance can be indexed via queue id
and IO tag directly.
The following IO commands are communicated via io_uring passthrough command,
and each command is only for forwarding the IO and committing the result
with specified IO tag in the command data:
- ``UBLK_IO_FETCH_REQ``
Sent from the server IO pthread for fetching future incoming IO requests
destined to ``/dev/ublkb*``. This command is sent only once from the server
IO pthread, for the ublk driver to set up the IO forwarding environment.
- ``UBLK_IO_COMMIT_AND_FETCH_REQ``
When an IO request is destined to ``/dev/ublkb*``, the driver stores
the IO's ``ublksrv_io_desc`` to the specified mapped area; then the
previously received IO command of this IO tag (either ``UBLK_IO_FETCH_REQ``
or ``UBLK_IO_COMMIT_AND_FETCH_REQ``) is completed, so the server gets
the IO notification via io_uring.
After the server handles the IO, its result is committed back to the
driver by sending ``UBLK_IO_COMMIT_AND_FETCH_REQ`` back. Once ublkdrv
receives this command, it parses the result and completes the request to
``/dev/ublkb*``, while also setting up the environment for fetching future
requests with the same IO tag. That is, ``UBLK_IO_COMMIT_AND_FETCH_REQ``
is reused for both fetching a request and committing back the IO result.
- ``UBLK_IO_NEED_GET_DATA``
With ``UBLK_F_NEED_GET_DATA`` enabled, a WRITE request is first
issued to the ublk server without a data copy. The IO backend of the ublk
server then receives the request, allocates a data buffer and embeds its
address inside this new io command. After the kernel driver gets the command,
the data copy is done from the request pages to this backend's buffer. Finally,
the backend receives the request again with the data to be written and can
truly handle the request.
``UBLK_IO_NEED_GET_DATA`` adds one additional round-trip and one
io_uring_enter() syscall. Any user who thinks that it may lower performance
should not enable UBLK_F_NEED_GET_DATA. The ublk server pre-allocates an IO
buffer for each IO by default. Any new project should try to use this
buffer to communicate with the ublk driver. However, existing projects may
break or be unable to consume the new buffer interface; that's why this
command is added for backwards compatibility, so that existing projects
can still consume existing buffers.
- data copy between ublk server IO buffer and ublk block IO request
For a WRITE request, the driver needs to copy the block IO request pages into
the server buffer (pages) first, before notifying the server of the coming IO,
so that the server can handle the WRITE request.
When the server handles a READ request and sends
``UBLK_IO_COMMIT_AND_FETCH_REQ`` to the driver, ublkdrv needs to copy
the server buffer (pages), which now hold the read data, to the IO request pages.
Future development
==================
Container-aware ublk device
----------------------------
The ublk driver doesn't handle any IO logic. Its function is well defined
for now, and only a very limited, equally well defined set of userspace
interfaces is needed. It is possible to make ublk devices container-aware
block devices in the future, as Stefan Hajnoczi suggested [#stefan]_, by
removing the ADMIN privilege.
Zero copy
---------
Zero copy is a generic requirement for nbd, fuse or similar drivers. A
problem [#xiaoguang]_ Xiaoguang mentioned is that pages mapped to userspace
can't be remapped any more in the kernel with existing mm interfaces. This can
occur when direct IO is destined to ``/dev/ublkb*``. Also, he reported that
big requests (IO size >= 256 KB) may benefit a lot from zero copy.
References
==========
.. [#userspace] https://github.com/ming1/ubdsrv
.. [#userspace_lib] https://github.com/ming1/ubdsrv/tree/master/lib
.. [#userspace_nbdublk] https://gitlab.com/rwmjones/libnbd/-/tree/nbdublk
.. [#userspace_readme] https://github.com/ming1/ubdsrv/blob/master/README
.. [#stefan] https://lore.kernel.org/linux-block/YoOr6jBfgVm8GvWg@stefanha-x1.localdomain/
.. [#xiaoguang] https://lore.kernel.org/linux-block/YoOr6jBfgVm8GvWg@stefanha-x1.localdomain/


@ -86,6 +86,7 @@ if major >= 3:
"__used",
"__weak",
"noinline",
"__fix_address",
# include/linux/memblock.h:
"__init_memblock",


@ -233,6 +233,7 @@ allOf:
- allwinner,sun8i-a83t-tcon-lcd
- allwinner,sun8i-v3s-tcon
- allwinner,sun9i-a80-tcon-lcd
- allwinner,sun20i-d1-tcon-lcd
then:
properties:
@ -252,6 +253,7 @@ allOf:
- allwinner,sun8i-a83t-tcon-tv
- allwinner,sun8i-r40-tcon-tv
- allwinner,sun9i-a80-tcon-tv
- allwinner,sun20i-d1-tcon-tv
then:
properties:
@ -278,6 +280,7 @@ allOf:
- allwinner,sun9i-a80-tcon-lcd
- allwinner,sun4i-a10-tcon
- allwinner,sun8i-a83t-tcon-lcd
- allwinner,sun20i-d1-tcon-lcd
then:
required:
@ -294,6 +297,7 @@ allOf:
- allwinner,sun8i-a23-tcon
- allwinner,sun8i-a33-tcon
- allwinner,sun8i-a83t-tcon-lcd
- allwinner,sun20i-d1-tcon-lcd
then:
properties:


@ -24,8 +24,10 @@ properties:
interrupts:
minItems: 1
maxItems: 2
description:
Should be configured with type IRQ_TYPE_EDGE_RISING.
If two interrupts are provided, expected order is INT1 and INT2.
required:
- compatible


@ -16,6 +16,7 @@ properties:
compatible:
enum:
- goodix,gt1151
- goodix,gt1158
- goodix,gt5663
- goodix,gt5688
- goodix,gt911


@ -14,7 +14,7 @@ MAC node:
- mac-address : The 6-byte MAC address. If present, it is the default
MAC address.
- internal-phy : phandle to the internal PHY node
- phy-handle : phandle the external PHY node
- phy-handle : phandle to the external PHY node
Internal PHY node:
- compatible : Should be "qcom,fsm9900-emac-sgmii" or "qcom,qdf2432-emac-sgmii".


@ -0,0 +1,273 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/regulator/mediatek,mt6331-regulator.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MT6331 Regulator from MediaTek Integrated
maintainers:
- AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
description: |
The MT6331 PMIC provides 6 BUCK and 21 LDO (Low Dropout) regulators
and nodes are named according to the regulator type:
buck-<name> and ldo-<name>.
MT6331 regulators node should be sub node of the MT6397 MFD node.
patternProperties:
"^buck-v(core2|io18|dvfs11|dvfs12|dvfs13|dvfs14)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^v(core2|io18|dvfs11|dvfs12|dvfs13|dvfs14)$"
unevaluatedProperties: false
"^ldo-v(avdd32aud|auxa32)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^v(avdd32aud|auxa32)$"
unevaluatedProperties: false
"^ldo-v(dig18|emc33|ibr|mc|mch|mipi|rtc|sram|usb10)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^v(dig18|emc33|ibr|mc|mch|mipi|rtc|sram|usb10)$"
unevaluatedProperties: false
"^ldo-vcam(a|af|d|io)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^vcam(a|af|d|io)$"
unevaluatedProperties: false
"^ldo-vtcxo[12]$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^vtcxo[12]$"
required:
- regulator-name
unevaluatedProperties: false
"^ldo-vgp[1234]$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^vgp[12]$"
required:
- regulator-name
unevaluatedProperties: false
additionalProperties: false
examples:
- |
pmic {
regulators {
mt6331_vdvfs11_reg: buck-vdvfs11 {
regulator-name = "vdvfs11";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6331_vdvfs12_reg: buck-vdvfs12 {
regulator-name = "vdvfs12";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6331_vdvfs13_reg: buck-vdvfs13 {
regulator-name = "vdvfs13";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6331_vdvfs14_reg: buck-vdvfs14 {
regulator-name = "vdvfs14";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6331_vcore2_reg: buck-vcore2 {
regulator-name = "vcore2";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6331_vio18_reg: buck-vio18 {
regulator-name = "vio18";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <0>;
regulator-allowed-modes = <0 1>;
};
mt6331_vtcxo1_reg: ldo-vtcxo1 {
regulator-name = "vtcxo1";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-always-on;
regulator-boot-on;
};
mt6331_vtcxo2_reg: ldo-vtcxo2 {
regulator-name = "vtcxo2";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-always-on;
regulator-boot-on;
};
mt6331_avdd32_aud_reg: ldo-avdd32aud {
regulator-name = "avdd32_aud";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <3200000>;
};
mt6331_vauxa32_reg: ldo-vauxa32 {
regulator-name = "vauxa32";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <3200000>;
};
mt6331_vcama_reg: ldo-vcama {
regulator-name = "vcama";
regulator-min-microvolt = <1500000>;
regulator-max-microvolt = <2800000>;
regulator-always-on;
};
mt6331_vio28_reg: ldo-vio28 {
regulator-name = "vio28";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-always-on;
regulator-boot-on;
};
mt6331_vcamaf_reg: ldo-vcamaf {
regulator-name = "vcam_af";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vmc_reg: ldo-vmc {
regulator-name = "vmc";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vmch_reg: ldo-vmch {
regulator-name = "vmch";
regulator-min-microvolt = <3000000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vemc33_reg: ldo-vemc33 {
regulator-name = "vemc33";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vgp1_reg: ldo-vgp1 {
regulator-name = "vgp1";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vsim1_reg: ldo-vsim1 {
regulator-name = "vsim1";
regulator-min-microvolt = <1700000>;
regulator-max-microvolt = <3100000>;
};
mt6331_vsim2_reg: ldo-vsim2 {
regulator-name = "vsim2";
regulator-min-microvolt = <1700000>;
regulator-max-microvolt = <3100000>;
};
mt6331_vmipi_reg: ldo-vmipi {
regulator-name = "vmipi";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vibr_reg: ldo-vibr {
regulator-name = "vibr";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <3300000>;
};
mt6331_vgp4_reg: ldo-vgp4 {
regulator-name = "vgp4";
regulator-min-microvolt = <1600000>;
regulator-max-microvolt = <2200000>;
};
mt6331_vcamd_reg: ldo-vcamd {
regulator-name = "vcamd";
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <1500000>;
};
mt6331_vusb10_reg: ldo-vusb10 {
regulator-name = "vusb";
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1300000>;
regulator-boot-on;
};
mt6331_vcamio_reg: ldo-vcamio {
regulator-name = "vcam_io";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1800000>;
};
mt6331_vsram_reg: ldo-vsram {
regulator-name = "vsram";
regulator-min-microvolt = <1012500>;
regulator-max-microvolt = <1012500>;
regulator-always-on;
regulator-boot-on;
};
mt6331_vgp2_reg: ldo-vgp2 {
regulator-name = "vgp2";
regulator-min-microvolt = <1100000>;
regulator-max-microvolt = <1500000>;
regulator-boot-on;
};
mt6331_vgp3_reg: ldo-vgp3 {
regulator-name = "vgp3";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1800000>;
};
mt6331_vrtc_reg: ldo-vrtc {
regulator-name = "vrtc";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-always-on;
};
mt6331_vdig18_reg: ldo-vdig18 {
regulator-name = "dvdd18_dig";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
};
};
...


@ -0,0 +1,112 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/regulator/mediatek,mt6332-regulator.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: MT6332 Regulator from MediaTek Integrated
maintainers:
- AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
description: |
The MT6332 Companion PMIC provides 6 BUCK and 4 LDO (Low Dropout)
regulators and nodes are named according to the regulator type:
buck-<name> and ldo-<name>.
MT6332 regulators node should be sub node of the MT6397 MFD node.
patternProperties:
"^buck-v(dram|dvfs2|pa|rf18a|rf18b|sbst)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^v(dram|dvfs2|pa|rf18a|rf18b|sbst)$"
unevaluatedProperties: false
"^ldo-v(bif28|dig18|sram|usb33)$":
type: object
$ref: "regulator.yaml#"
properties:
regulator-name:
pattern: "^v(bif28|dig18|sram|usb33)$"
unevaluatedProperties: false
additionalProperties: false
examples:
- |
pmic {
regulators {
mt6332_vdram_reg: buck-vdram {
regulator-name = "vdram";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-ramp-delay = <12500>;
regulator-allowed-modes = <0 1>;
regulator-always-on;
};
mt6332_vdvfs2_reg: buck-vdvfs2 {
regulator-name = "vdvfs2";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1312500>;
regulator-ramp-delay = <12500>;
regulator-enable-ramp-delay = <1>;
regulator-allowed-modes = <0 1>;
};
mt6332_vpa_reg: buck-vpa {
regulator-name = "vpa";
regulator-min-microvolt = <500000>;
regulator-max-microvolt = <3400000>;
};
mt6332_vrf18a_reg: buck-vrf18a {
regulator-name = "vrf18a";
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <2240625>;
regulator-allowed-modes = <0 1>;
};
mt6332_vrf18b_reg: buck-vrf18b {
regulator-name = "vrf18b";
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <2240625>;
regulator-allowed-modes = <0 1>;
};
mt6332_vsbst_reg: buck-vsbst {
regulator-name = "vsbst";
regulator-min-microvolt = <3500000>;
regulator-max-microvolt = <7468750>;
};
mt6332_vauxb32_reg: ldo-vauxb32 {
regulator-name = "vauxb32";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <3200000>;
};
mt6332_vbif28_reg: ldo-vbif28 {
regulator-name = "vbif28";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
};
mt6332_vdig18_reg: ldo-vdig18 {
regulator-name = "vdig18";
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1800000>;
regulator-always-on;
};
mt6332_vsram_reg: ldo-vsram {
regulator-name = "vauxa32";
regulator-min-microvolt = <700000>;
regulator-max-microvolt = <1493750>;
regulator-always-on;
};
mt6332_vusb33_reg: ldo-vusb33 {
regulator-name = "vusb33";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
};
};
...


@ -10,7 +10,7 @@ description:
See spi-peripheral-props.yaml for more info.
maintainers:
- Pratyush Yadav <p.yadav@ti.com>
- Vaishnav Achath <vaishnav.a@ti.com>
properties:
# cdns,qspi-nor.yaml


@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Cadence Quad SPI controller
maintainers:
- Pratyush Yadav <p.yadav@ti.com>
- Vaishnav Achath <vaishnav.a@ti.com>
allOf:
- $ref: spi-controller.yaml#


@ -16,7 +16,7 @@ description:
their own separate schema that should be referenced from here.
maintainers:
- Pratyush Yadav <p.yadav@ti.com>
- Mark Brown <broonie@kernel.org>
properties:
reg:


@ -42,7 +42,7 @@ properties:
description:
Address ranges of the thermal registers. If more then one range is given
the first one must be the common registers followed by each sensor
according the datasheet.
according to the datasheet.
minItems: 1
maxItems: 4


@ -214,6 +214,7 @@ patternProperties:
- polling-delay
- polling-delay-passive
- thermal-sensors
- trips
additionalProperties: false


@ -24,6 +24,7 @@ properties:
- mediatek,mt2712-mtu3
- mediatek,mt8173-mtu3
- mediatek,mt8183-mtu3
- mediatek,mt8188-mtu3
- mediatek,mt8192-mtu3
- mediatek,mt8195-mtu3
- const: mediatek,mtu3


@ -33,6 +33,7 @@ properties:
- qcom,sm6115-dwc3
- qcom,sm6125-dwc3
- qcom,sm6350-dwc3
- qcom,sm6375-dwc3
- qcom,sm8150-dwc3
- qcom,sm8250-dwc3
- qcom,sm8350-dwc3
@ -108,12 +109,17 @@ properties:
HS/FS/LS modes are supported.
type: boolean
wakeup-source: true
# Required child node:
patternProperties:
"^usb@[0-9a-f]+$":
$ref: snps,dwc3.yaml#
properties:
wakeup-source: false
required:
- compatible
- reg


@ -517,6 +517,7 @@ All I-Force devices are supported by the iforce module. This includes:
* AVB Mag Turbo Force
* AVB Top Shot Pegasus
* AVB Top Shot Force Feedback Racing Wheel
* Boeder Force Feedback Wheel
* Logitech WingMan Force
* Logitech WingMan Force Wheel
* Guillemot Race Leader Force Feedback


@ -525,8 +525,8 @@ followed by a test macro::
If you need to expose a compiler capability to makefiles and/or C source files,
`CC_HAS_` is the recommended prefix for the config option::
config CC_HAS_ASM_GOTO
def_bool $(success,$(srctree)/scripts/gcc-goto.sh $(CC))
config CC_HAS_FOO
def_bool $(success,$(srctree)/scripts/cc-check-foo.sh $(CC))
Build as module only
~~~~~~~~~~~~~~~~~~~~


@ -67,7 +67,7 @@ The ``netdevsim`` driver supports rate objects management, which includes:
- setting tx_share and tx_max rate values for any rate object type;
- setting parent node for any rate object type.
Rate nodes and it's parameters are exposed in ``netdevsim`` debugfs in RO mode.
Rate nodes and their parameters are exposed in ``netdevsim`` debugfs in RO mode.
For example created rate node with name ``some_group``:
.. code:: shell


@ -8,7 +8,7 @@ Transmit path guidelines:
1) The ndo_start_xmit method must not return NETDEV_TX_BUSY under
any normal circumstances. It is considered a hard error unless
there is no way your device can tell ahead of time when it's
there is no way your device can tell ahead of time when its
transmit function will become busy.
Instead it must maintain the queue properly. For example,


@ -1035,7 +1035,10 @@ tcp_limit_output_bytes - INTEGER
tcp_challenge_ack_limit - INTEGER
Limits number of Challenge ACK sent per second, as recommended
in RFC 5961 (Improving TCP's Robustness to Blind In-Window Attacks)
Default: 1000
Note that this per netns rate limit can allow some side channel
attacks and probably should not be enabled.
TCP stack implements per TCP socket limits anyway.
Default: INT_MAX (unlimited)
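
For reference, a small sketch of inspecting the limit at runtime, and of
restoring the old default of 1000 for those who still want the per-netns cap
(sysctl name as documented in this file; the value is just an illustration)::

sysctl net.ipv4.tcp_challenge_ack_limit
sysctl -w net.ipv4.tcp_challenge_ack_limit=1000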
UDP variables
=============


@ -11,7 +11,7 @@ Initial Release:
================
This is conceptually very similar to the macvlan driver with one major
exception of using L3 for mux-ing /demux-ing among slaves. This property makes
the master device share the L2 with it's slave devices. I have developed this
the master device share the L2 with its slave devices. I have developed this
driver in conjunction with network namespaces and not sure if there is use case
outside of it.


@ -530,7 +530,7 @@ its tunnel close actions. For L2TPIP sockets, the socket's close
handler initiates the same tunnel close actions. All sessions are
first closed. Each session drops its tunnel ref. When the tunnel ref
reaches zero, the tunnel puts its socket ref. When the socket is
eventually destroyed, it's sk_destruct finally frees the L2TP tunnel
eventually destroyed, its sk_destruct finally frees the L2TP tunnel
context.
Sessions


@ -159,7 +159,7 @@ tools such as iproute2.
The switchdev driver can know a particular port's position in the topology by
monitoring NETDEV_CHANGEUPPER notifications. For example, a port moved into a
bond will see it's upper master change. If that bond is moved into a bridge,
bond will see its upper master change. If that bond is moved into a bridge,
the bond's upper master will change. And so on. The driver will track such
movements to know what position a port is in in the overall topology by
registering for netdevice events and acting on NETDEV_CHANGEUPPER.


@ -70,8 +70,16 @@
% Translations have Asian (CJK) characters which are only displayed if
% xeCJK is used
\usepackage{ifthen}
\newboolean{enablecjk}
\setboolean{enablecjk}{false}
\IfFontExistsTF{Noto Sans CJK SC}{
% Load xeCJK when CJK font is available
\IfFileExists{xeCJK.sty}{
\setboolean{enablecjk}{true}
}{}
}{}
\ifthenelse{\boolean{enablecjk}}{
% Load xeCJK when both the Noto Sans CJK font and xeCJK.sty are available.
\usepackage{xeCJK}
% Noto CJK fonts don't provide slant shape. [AutoFakeSlant] permits
% its emulation.
@ -196,7 +204,7 @@
% Inactivate CJK after tableofcontents
\apptocmd{\sphinxtableofcontents}{\kerneldocCJKoff}{}{}
\xeCJKsetup{CJKspace = true}% For inter-phrase space of Korean TOC
}{ % No CJK font found
}{ % Don't enable CJK
% Custom macros to on/off CJK and switch CJK fonts (Dummy)
\newcommand{\kerneldocCJKon}{}
\newcommand{\kerneldocCJKoff}{}
@ -204,14 +212,16 @@
%% and ignore the argument (#1) in their definitions, whole contents of
%% CJK chapters can be ignored.
\newcommand{\kerneldocBeginSC}[1]{%
%% Put a note on missing CJK fonts in place of zh_CN translation.
\begin{sphinxadmonition}{note}{Note on missing fonts:}
%% Put a note on missing CJK fonts or the xecjk package in place of
%% zh_CN translation.
\begin{sphinxadmonition}{note}{Note on missing fonts and a package:}
Translations of Simplified Chinese (zh\_CN), Traditional Chinese
(zh\_TW), Korean (ko\_KR), and Japanese (ja\_JP) were skipped
due to the lack of suitable font families.
due to the lack of suitable font families and/or the texlive-xecjk
package.
If you want them, please install ``Noto Sans CJK'' font families
by following instructions from
along with the texlive-xecjk package by following instructions from
\sphinxcode{./scripts/sphinx-pre-install}.
Having optional ``Noto Serif CJK'' font families will improve
the looks of those translations.


@ -33,7 +33,7 @@ EXAMPLE
=======
In the example below, **rtla timerlat hist** is set to run for *10* minutes,
in the cpus *0-4*, *skipping zero* only lines. Moreover, **rtla timerlat
hist** will change the priority of the *timelat* threads to run under
hist** will change the priority of the *timerlat* threads to run under
*SCHED_DEADLINE* priority, with a *10us* runtime every *1ms* period. The
*1ms* period is also passed to the *timerlat* tracer::


@ -35,8 +35,7 @@ Linux カーネルに変更を加えたいと思っている個人又は会社
てもらえやすくする提案を集めたものです。
コードを投稿する前に、Documentation/process/submit-checklist.rst の項目リストに目
を通してチェックしてください。もしあなたがドライバーを投稿しようとし
ているなら、Documentation/process/submitting-drivers.rst にも目を通してください。
を通してチェックしてください。
--------------------------------------------
セクション1 パッチの作り方と送り方


@ -2178,7 +2178,7 @@ M: Jean-Marie Verdun <verdun@hpe.com>
M: Nick Hawkins <nick.hawkins@hpe.com>
S: Maintained
F: Documentation/devicetree/bindings/arm/hpe,gxp.yaml
F: Documentation/devicetree/bindings/spi/hpe,gxp-spi.yaml
F: Documentation/devicetree/bindings/spi/hpe,gxp-spifi.yaml
F: Documentation/devicetree/bindings/timer/hpe,gxp-timer.yaml
F: arch/arm/boot/dts/hpe-bmc*
F: arch/arm/boot/dts/hpe-gxp*
@ -3612,6 +3612,7 @@ F: include/linux/find.h
F: include/linux/nodemask.h
F: lib/bitmap.c
F: lib/cpumask.c
F: lib/cpumask_kunit.c
F: lib/find_bit.c
F: lib/find_bit_benchmark.c
F: lib/test_bitmap.c
@ -3679,6 +3680,7 @@ F: Documentation/networking/bonding.rst
F: drivers/net/bonding/
F: include/net/bond*
F: include/uapi/linux/if_bonding.h
F: tools/testing/selftests/drivers/net/bonding/
BOSCH SENSORTEC BMA400 ACCELEROMETER IIO DRIVER
M: Dan Robertson <dan@dlrobertson.com>
@ -5145,6 +5147,7 @@ T: git git://git.samba.org/sfrench/cifs-2.6.git
F: Documentation/admin-guide/cifs/
F: fs/cifs/
F: fs/smbfs_common/
F: include/uapi/linux/cifs
COMPACTPCI HOTPLUG CORE
M: Scott Murray <scott@spiteful.org>
@ -9780,7 +9783,7 @@ M: Christian Brauner <brauner@kernel.org>
M: Seth Forshee <sforshee@kernel.org>
L: linux-fsdevel@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git
T: git://git.kernel.org/pub/scm/linux/kernel/git/vfs/idmapping.git
F: Documentation/filesystems/idmappings.rst
F: tools/testing/selftests/mount_setattr/
F: include/linux/mnt_idmapping.h
@ -10029,6 +10032,7 @@ F: Documentation/devicetree/bindings/input/
F: Documentation/devicetree/bindings/serio/
F: Documentation/input/
F: drivers/input/
F: include/dt-bindings/input/
F: include/linux/input.h
F: include/linux/input/
F: include/uapi/linux/input-event-codes.h
@ -10657,6 +10661,7 @@ T: git git://git.kernel.dk/linux-block
T: git git://git.kernel.dk/liburing
F: io_uring/
F: include/linux/io_uring.h
F: include/linux/io_uring_types.h
F: include/uapi/linux/io_uring.h
F: tools/io_uring/
@ -20761,6 +20766,7 @@ UBLK USERSPACE BLOCK DRIVER
M: Ming Lei <ming.lei@redhat.com>
L: linux-block@vger.kernel.org
S: Maintained
F: Documentation/block/ublk.rst
F: drivers/block/ublk_drv.c
F: include/uapi/linux/ublk_cmd.h
@ -22302,7 +22308,7 @@ M: Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
R: Srinivas Neeli <srinivas.neeli@xilinx.com>
R: Michal Simek <michal.simek@xilinx.com>
S: Maintained
F: Documentation/devicetree/bindings/gpio/gpio-xilinx.txt
F: Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml
F: Documentation/devicetree/bindings/gpio/gpio-zynq.yaml
F: drivers/gpio/gpio-xilinx.c
F: drivers/gpio/gpio-zynq.c


@ -2,7 +2,7 @@
VERSION = 6
PATCHLEVEL = 0
SUBLEVEL = 0
EXTRAVERSION = -rc1
EXTRAVERSION = -rc4
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*
@ -1113,13 +1113,11 @@ vmlinux-alldirs := $(sort $(vmlinux-dirs) Documentation \
$(patsubst %/,%,$(filter %/, $(core-) \
$(drivers-) $(libs-))))
subdir-modorder := $(addsuffix modules.order,$(filter %/, \
$(core-y) $(core-m) $(libs-y) $(libs-m) \
$(drivers-y) $(drivers-m)))
build-dirs := $(vmlinux-dirs)
clean-dirs := $(vmlinux-alldirs)
subdir-modorder := $(addsuffix /modules.order, $(build-dirs))
# Externally visible symbols (used by link-vmlinux.sh)
KBUILD_VMLINUX_OBJS := $(head-y) $(patsubst %/,%/built-in.a, $(core-y))
KBUILD_VMLINUX_OBJS += $(addsuffix built-in.a, $(filter %/, $(libs-y)))


@ -53,7 +53,6 @@ config KPROBES
config JUMP_LABEL
bool "Optimize very unlikely/likely branches"
depends on HAVE_ARCH_JUMP_LABEL
depends on CC_HAS_ASM_GOTO
select OBJTOOL if HAVE_JUMP_LABEL_HACK
help
This option enables a transparent branch optimization that
@ -1361,7 +1360,7 @@ config HAVE_PREEMPT_DYNAMIC_CALL
config HAVE_PREEMPT_DYNAMIC_KEY
bool
depends on HAVE_ARCH_JUMP_LABEL && CC_HAS_ASM_GOTO
depends on HAVE_ARCH_JUMP_LABEL
select HAVE_PREEMPT_DYNAMIC
help
An architecture should select this if it can handle the preemption


@ -283,11 +283,8 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
return (old & mask) != 0;
}
static __always_inline bool
arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
{
return (1UL & (((const int *) addr)[nr >> 5] >> (nr & 31))) != 0UL;
}
#define arch_test_bit generic_test_bit
#define arch_test_bit_acquire generic_test_bit_acquire
/*
* ffz = Find First Zero in word. Undefined if no zero exists,


@ -917,6 +917,23 @@ config ARM64_ERRATUM_1902691
If unsure, say Y.
config ARM64_ERRATUM_2457168
bool "Cortex-A510: 2457168: workaround for AMEVCNTR01 incrementing incorrectly"
depends on ARM64_AMU_EXTN
default y
help
This option adds the workaround for ARM Cortex-A510 erratum 2457168.
The AMU counter AMEVCNTR01 (constant counter) should increment at the same rate
as the system counter. On affected Cortex-A510 cores AMEVCNTR01 increments
incorrectly giving a significantly higher output value.
Work around this problem by returning 0 when reading the affected counter in
key locations, which results in disabling all users of this counter. This
effect is the same as firmware disabling the affected counters.
If unsure, say Y.
config CAVIUM_ERRATUM_22375
bool "Cavium erratum 22375, 24313"
default y


@ -71,7 +71,7 @@ static __always_inline int icache_is_vpipt(void)
static inline u32 cache_type_cwg(void)
{
return (read_cpuid_cachetype() >> CTR_EL0_CWG_SHIFT) & CTR_EL0_CWG_MASK;
return SYS_FIELD_GET(CTR_EL0, CWG, read_cpuid_cachetype());
}
#define __read_mostly __section(".data..read_mostly")


@ -153,7 +153,7 @@ struct vl_info {
#ifdef CONFIG_ARM64_SVE
extern void sve_alloc(struct task_struct *task);
extern void sve_alloc(struct task_struct *task, bool flush);
extern void fpsimd_release_task(struct task_struct *task);
extern void fpsimd_sync_to_sve(struct task_struct *task);
extern void fpsimd_force_sync_to_sve(struct task_struct *task);
@ -256,7 +256,7 @@ size_t sve_state_size(struct task_struct const *task);
#else /* ! CONFIG_ARM64_SVE */
static inline void sve_alloc(struct task_struct *task) { }
static inline void sve_alloc(struct task_struct *task, bool flush) { }
static inline void fpsimd_release_task(struct task_struct *task) { }
static inline void sve_sync_to_fpsimd(struct task_struct *task) { }
static inline void sve_sync_from_fpsimd_zeropad(struct task_struct *task) { }


@ -64,28 +64,28 @@
#define EARLY_KASLR (0)
#endif
#define EARLY_ENTRIES(vstart, vend, shift) \
((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
#define EARLY_ENTRIES(vstart, vend, shift, add) \
((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + add)
#define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
#define EARLY_PGDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT, add))
#if SWAPPER_PGTABLE_LEVELS > 3
#define EARLY_PUDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT))
#define EARLY_PUDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, PUD_SHIFT, add))
#else
#define EARLY_PUDS(vstart, vend) (0)
#define EARLY_PUDS(vstart, vend, add) (0)
#endif
#if SWAPPER_PGTABLE_LEVELS > 2
#define EARLY_PMDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT))
#define EARLY_PMDS(vstart, vend, add) (EARLY_ENTRIES(vstart, vend, SWAPPER_TABLE_SHIFT, add))
#else
#define EARLY_PMDS(vstart, vend) (0)
#define EARLY_PMDS(vstart, vend, add) (0)
#endif
#define EARLY_PAGES(vstart, vend) ( 1 /* PGDIR page */ \
+ EARLY_PGDS((vstart), (vend)) /* each PGDIR needs a next level page table */ \
+ EARLY_PUDS((vstart), (vend)) /* each PUD needs a next level page table */ \
+ EARLY_PMDS((vstart), (vend))) /* each PMD needs a next level page table */
#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end))
#define EARLY_PAGES(vstart, vend, add) ( 1 /* PGDIR page */ \
+ EARLY_PGDS((vstart), (vend), add) /* each PGDIR needs a next level page table */ \
+ EARLY_PUDS((vstart), (vend), add) /* each PUD needs a next level page table */ \
+ EARLY_PMDS((vstart), (vend), add)) /* each PMD needs a next level page table */
#define INIT_DIR_SIZE (PAGE_SIZE * EARLY_PAGES(KIMAGE_VADDR, _end, EARLY_KASLR))
/* the initial ID map may need two extra pages if it needs to be extended */
#if VA_BITS < 48
@ -93,7 +93,7 @@
#else
#define INIT_IDMAP_DIR_SIZE (INIT_IDMAP_DIR_PAGES * PAGE_SIZE)
#endif
#define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE)
#define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
/* Initial memory map size */
#if ARM64_KERNEL_USES_PMD_MAPS


@ -929,6 +929,10 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
(system_supports_mte() && \
test_bit(KVM_ARCH_FLAG_MTE_ENABLED, &(kvm)->arch.flags))
#define kvm_supports_32bit_el0() \
(system_supports_32bit_el0() && \
!static_branch_unlikely(&arm64_mismatched_32bit_el0))
int kvm_trng_call(struct kvm_vcpu *vcpu);
#ifdef CONFIG_KVM
extern phys_addr_t hyp_mem_base;


@ -3,6 +3,8 @@
#ifndef __ARM64_ASM_SETUP_H
#define __ARM64_ASM_SETUP_H
#include <linux/string.h>
#include <uapi/asm/setup.h>
void *get_early_fdt_ptr(void);
@ -14,4 +16,19 @@ void early_fdt_map(u64 dt_phys);
extern phys_addr_t __fdt_pointer __initdata;
extern u64 __cacheline_aligned boot_args[4];
static inline bool arch_parse_debug_rodata(char *arg)
{
extern bool rodata_enabled;
extern bool rodata_full;
if (arg && !strcmp(arg, "full")) {
rodata_enabled = true;
rodata_full = true;
return true;
}
return false;
}
#define arch_parse_debug_rodata arch_parse_debug_rodata
#endif


@ -1116,6 +1116,7 @@
#else
#include <linux/bitfield.h>
#include <linux/build_bug.h>
#include <linux/types.h>
#include <asm/alternative.h>
@ -1209,8 +1210,6 @@
par; \
})
#endif
#define SYS_FIELD_GET(reg, field, val) \
FIELD_GET(reg##_##field##_MASK, val)
@ -1220,4 +1219,6 @@
#define SYS_FIELD_PREP_ENUM(reg, field, val) \
FIELD_PREP(reg##_##field##_MASK, reg##_##field##_##val)
#endif
#endif /* __ASM_SYSREG_H */


@ -75,9 +75,11 @@ struct kvm_regs {
/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
#define KVM_ARM_DEVICE_TYPE_SHIFT 0
#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
#define KVM_ARM_DEVICE_TYPE_MASK GENMASK(KVM_ARM_DEVICE_TYPE_SHIFT + 15, \
KVM_ARM_DEVICE_TYPE_SHIFT)
#define KVM_ARM_DEVICE_ID_SHIFT 16
#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
#define KVM_ARM_DEVICE_ID_MASK GENMASK(KVM_ARM_DEVICE_ID_SHIFT + 15, \
KVM_ARM_DEVICE_ID_SHIFT)
/* Supported device IDs */
#define KVM_ARM_DEVICE_VGIC_V2 0


@ -45,7 +45,8 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
int init_cache_level(unsigned int cpu)
{
unsigned int ctype, level, leaves, fw_level;
unsigned int ctype, level, leaves;
int fw_level;
struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
for (level = 1, leaves = 0; level <= MAX_CACHE_LEVEL; level++) {
@ -63,6 +64,9 @@ int init_cache_level(unsigned int cpu)
else
fw_level = acpi_find_last_cache_level(cpu);
if (fw_level < 0)
return fw_level;
if (level < fw_level) {
/*
* some external caches not specified in CLIDR_EL1


@ -208,6 +208,8 @@ static const struct arm64_cpu_capabilities arm64_repeat_tlbi_list[] = {
#ifdef CONFIG_ARM64_ERRATUM_1286807
{
ERRATA_MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0),
},
{
/* Kryo4xx Gold (rcpe to rfpe) => (r0p0 to r3p0) */
ERRATA_MIDR_RANGE(MIDR_QCOM_KRYO_4XX_GOLD, 0xc, 0xe, 0xf, 0xe),
},
@ -654,6 +656,16 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
ERRATA_MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2)
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_2457168
{
.desc = "ARM erratum 2457168",
.capability = ARM64_WORKAROUND_2457168,
.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
/* Cortex-A510 r0p0-r1p1 */
CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
},
#endif
#ifdef CONFIG_ARM64_ERRATUM_2038923
{
.desc = "ARM erratum 2038923",


@ -1870,7 +1870,10 @@ static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
smp_processor_id());
cpumask_set_cpu(smp_processor_id(), &amu_cpus);
update_freq_counters_refs();
/* 0 reference values signal broken/disabled counters */
if (!this_cpu_has_cap(ARM64_WORKAROUND_2457168))
update_freq_counters_refs();
}
}


@ -502,7 +502,7 @@ tsk .req x28 // current thread_info
SYM_CODE_START(vectors)
kernel_ventry 1, t, 64, sync // Synchronous EL1t
kernel_ventry 1, t, 64, irq // IRQ EL1t
kernel_ventry 1, t, 64, fiq // FIQ EL1h
kernel_ventry 1, t, 64, fiq // FIQ EL1t
kernel_ventry 1, t, 64, error // Error EL1t
kernel_ventry 1, h, 64, sync // Synchronous EL1h


@ -715,10 +715,12 @@ size_t sve_state_size(struct task_struct const *task)
* do_sve_acc() case, there is no ABI requirement to hide stale data
* written previously be task.
*/
void sve_alloc(struct task_struct *task)
void sve_alloc(struct task_struct *task, bool flush)
{
if (task->thread.sve_state) {
memset(task->thread.sve_state, 0, sve_state_size(task));
if (flush)
memset(task->thread.sve_state, 0,
sve_state_size(task));
return;
}
@ -1388,7 +1390,7 @@ void do_sve_acc(unsigned long esr, struct pt_regs *regs)
return;
}
sve_alloc(current);
sve_alloc(current, true);
if (!current->thread.sve_state) {
force_sig(SIGKILL);
return;
@ -1439,7 +1441,7 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
return;
}
sve_alloc(current);
sve_alloc(current, false);
sme_alloc(current);
if (!current->thread.sve_state || !current->thread.za_state) {
force_sig(SIGKILL);
@ -1460,17 +1462,6 @@ void do_sme_acc(unsigned long esr, struct pt_regs *regs)
fpsimd_bind_task_to_cpu();
}
/*
* If SVE was not already active initialise the SVE registers,
* any non-shared state between the streaming and regular SVE
* registers is architecturally guaranteed to be zeroed when
* we enter streaming mode. We do not need to initialize ZA
* since ZA must be disabled at this point and enabling ZA is
* architecturally defined to zero ZA.
*/
if (system_supports_sve() && !test_thread_flag(TIF_SVE))
sve_init_regs();
put_cpu_fpsimd_context();
}


@ -371,7 +371,9 @@ SYM_FUNC_END(create_idmap)
SYM_FUNC_START_LOCAL(create_kernel_mapping)
adrp x0, init_pg_dir
mov_q x5, KIMAGE_VADDR // compile time __va(_text)
#ifdef CONFIG_RELOCATABLE
add x5, x5, x23 // add KASLR displacement
#endif
adrp x6, _end // runtime __pa(_end)
adrp x3, _text // runtime __pa(_text)
sub x6, x6, x3 // _end - _text


@ -47,7 +47,7 @@ static int prepare_elf_headers(void **addr, unsigned long *sz)
u64 i;
phys_addr_t start, end;
nr_ranges = 1; /* for exclusion of crashkernel region */
nr_ranges = 2; /* for exclusion of crashkernel region */
for_each_mem_range(i, &start, &end)
nr_ranges++;


@ -94,11 +94,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
seed = get_kaslr_seed(fdt);
if (!seed) {
#ifdef CONFIG_ARCH_RANDOM
if (!__early_cpu_has_rndr() ||
!__arm64_rndr((unsigned long *)&seed))
#endif
return 0;
if (!__early_cpu_has_rndr() ||
!__arm64_rndr((unsigned long *)&seed))
return 0;
}
/*


@ -882,7 +882,7 @@ static int sve_set_common(struct task_struct *target,
* state and ensure there's storage.
*/
if (target->thread.svcr != old_svcr)
sve_alloc(target);
sve_alloc(target, true);
}
/* Registers: FPSIMD-only case */
@ -912,7 +912,7 @@ static int sve_set_common(struct task_struct *target,
goto out;
}
sve_alloc(target);
sve_alloc(target, true);
if (!target->thread.sve_state) {
ret = -ENOMEM;
clear_tsk_thread_flag(target, TIF_SVE);
@ -1082,7 +1082,7 @@ static int za_set(struct task_struct *target,
/* Ensure there is some SVE storage for streaming mode */
if (!target->thread.sve_state) {
sve_alloc(target);
sve_alloc(target, false);
if (!target->thread.sve_state) {
clear_thread_flag(TIF_SME);
ret = -ENOMEM;

View file

@ -91,7 +91,7 @@ static size_t sigframe_size(struct rt_sigframe_user_layout const *user)
* not taken into account. This limit is not a guarantee and is
* NOT ABI.
*/
#define SIGFRAME_MAXSZ SZ_64K
#define SIGFRAME_MAXSZ SZ_256K
static int __sigframe_alloc(struct rt_sigframe_user_layout *user,
unsigned long *offset, size_t size, bool extend)
@ -310,7 +310,7 @@ static int restore_sve_fpsimd_context(struct user_ctxs *user)
fpsimd_flush_task_state(current);
/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
sve_alloc(current);
sve_alloc(current, true);
if (!current->thread.sve_state) {
clear_thread_flag(TIF_SVE);
return -ENOMEM;
@ -926,6 +926,16 @@ static void setup_return(struct pt_regs *regs, struct k_sigaction *ka,
/* Signal handlers are invoked with ZA and streaming mode disabled */
if (system_supports_sme()) {
/*
* If we were in streaming mode the saved register
* state was SVE but we will exit SM and use the
* FPSIMD register state - flush the saved FPSIMD
* register state in case it gets loaded.
*/
if (current->thread.svcr & SVCR_SM_MASK)
memset(&current->thread.uw.fpsimd_state, 0,
sizeof(current->thread.uw.fpsimd_state));
current->thread.svcr &= ~(SVCR_ZA_MASK |
SVCR_SM_MASK);
sme_smstop();

View file

@ -296,12 +296,25 @@ core_initcall(init_amu_fie);
static void cpu_read_corecnt(void *val)
{
/*
* A value of 0 can be returned if the current CPU does not support AMUs
* or if the counter is disabled for this CPU. A return value of 0 at
* counter read is properly handled as an error case by the users of the
* counter.
*/
*(u64 *)val = read_corecnt();
}
static void cpu_read_constcnt(void *val)
{
*(u64 *)val = read_constcnt();
/*
* Return 0 if the current CPU is affected by erratum 2457168. A value
* of 0 is also returned if the current CPU does not support AMUs or if
* the counter is disabled. A return value of 0 at counter read is
* properly handled as an error case by the users of the counter.
*/
*(u64 *)val = this_cpu_has_cap(ARM64_WORKAROUND_2457168) ?
0UL : read_constcnt();
}
static inline
@ -328,7 +341,22 @@ int counters_read_on_cpu(int cpu, smp_call_func_t func, u64 *val)
*/
bool cpc_ffh_supported(void)
{
return freq_counters_valid(get_cpu_with_amu_feat());
int cpu = get_cpu_with_amu_feat();
/*
* FFH is considered supported if there is at least one present CPU that
* supports AMUs. Using FFH to read core and reference counters for CPUs
* that do not support AMUs, have counters disabled or that are affected
* by errata, will result in a return value of 0.
*
* This is done to allow any enabled and valid counters to be read
* through FFH, knowing that potentially returning 0 as counter value is
* properly handled by the users of these counters.
*/
if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
return false;
return true;
}
int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
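
The comments added to cpu_read_corecnt(), cpu_read_constcnt() and cpc_ffh_supported() above all rely on one convention: a counter value of 0 means the counter is unavailable (no AMU, disabled, or quirked away by erratum 2457168), and callers must treat it as an error rather than as data. A small user-space sketch of that convention; the helper names, the reference rate and the frequency formula are illustrative assumptions, not kernel APIs:

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for per-CPU counter reads; 0 signals broken/disabled/quirked. */
static uint64_t read_core_cycles(void)     { return 0; /* e.g. no AMU */ }
static uint64_t read_constant_cycles(void) { return 123456; }

static int estimate_freq_khz(uint64_t ref_rate_khz, uint64_t *out_khz)
{
	uint64_t core0  = read_core_cycles();
	uint64_t const0 = read_constant_cycles();
	/* ... some busy period elapses here ... */
	uint64_t core1  = read_core_cycles();
	uint64_t const1 = read_constant_cycles();

	/* A zero sample means "counter unavailable": report an error
	 * rather than computing a bogus frequency. */
	if (!core0 || !const0 || !core1 || !const1 || const1 == const0)
		return -1;

	*out_khz = ref_rate_khz * (core1 - core0) / (const1 - const0);
	return 0;
}

int main(void)
{
	uint64_t khz;

	if (estimate_freq_khz(24000, &khz))
		puts("counters unavailable, falling back");
	else
		printf("estimated frequency: %llu kHz\n",
		       (unsigned long long)khz);
	return 0;
}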

View file

@ -757,8 +757,7 @@ static bool vcpu_mode_is_bad_32bit(struct kvm_vcpu *vcpu)
if (likely(!vcpu_mode_is_32bit(vcpu)))
return false;
return !system_supports_32bit_el0() ||
static_branch_unlikely(&arm64_mismatched_32bit_el0);
return !kvm_supports_32bit_el0();
}
/**

View file

@ -242,7 +242,7 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
u64 mode = (*(u64 *)valp) & PSR_AA32_MODE_MASK;
switch (mode) {
case PSR_AA32_MODE_USR:
if (!system_supports_32bit_el0())
if (!kvm_supports_32bit_el0())
return -EINVAL;
break;
case PSR_AA32_MODE_FIQ:

View file

@ -993,7 +993,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
* THP doesn't start to split while we are adjusting the
* refcounts.
*
* We are sure this doesn't happen, because mmu_notifier_retry
* We are sure this doesn't happen, because mmu_invalidate_retry
* was successful and we are holding the mmu_lock, so if this
* THP is trying to split, it will be blocked in the mmu
* notifier before touching any of the pages, specifically
@ -1188,9 +1188,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return ret;
}
mmu_seq = vcpu->kvm->mmu_notifier_seq;
mmu_seq = vcpu->kvm->mmu_invalidate_seq;
/*
* Ensure the read of mmu_notifier_seq happens before we call
* Ensure the read of mmu_invalidate_seq happens before we call
* gfn_to_pfn_prot (which calls get_user_pages), so that we don't risk
* the page we just got a reference to gets unmapped before we have a
* chance to grab the mmu_lock, which ensure that if the page gets
@ -1246,7 +1246,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
else
write_lock(&kvm->mmu_lock);
pgt = vcpu->arch.hw_mmu->pgt;
if (mmu_notifier_retry(kvm, mmu_seq))
if (mmu_invalidate_retry(kvm, mmu_seq))
goto out_unlock;
/*

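The mmu_notifier_* to mmu_invalidate_* rename above is mechanical, but the pattern the comments describe is worth spelling out: sample a sequence count, perform the lockless (and possibly sleeping) page lookup, then recheck the count under the lock and retry if an invalidation ran in between. A rough user-space analogue of that handshake, with made-up names and pthread/C11 primitives standing in for KVM's locking:

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>

static _Atomic unsigned long invalidate_seq;
static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

/* Invalidation side: bump the sequence while holding the lock. */
void invalidate_range_sketch(void)
{
	pthread_mutex_lock(&map_lock);
	atomic_fetch_add_explicit(&invalidate_seq, 1, memory_order_release);
	/* ... tear the mapping down ... */
	pthread_mutex_unlock(&map_lock);
}

/* Fault side: sample the sequence, do the slow lockless lookup, then
 * verify under the lock before installing anything. */
bool install_mapping_sketch(void)
{
	unsigned long seq =
		atomic_load_explicit(&invalidate_seq, memory_order_acquire);

	/* Lockless, potentially sleeping translation step
	 * (gfn_to_pfn_prot() in the KVM code above). */

	pthread_mutex_lock(&map_lock);
	if (atomic_load_explicit(&invalidate_seq,
				 memory_order_relaxed) != seq) {
		/* An invalidation raced with us: drop the result, retry. */
		pthread_mutex_unlock(&map_lock);
		return false;
	}
	/* Safe to install the mapping here. */
	pthread_mutex_unlock(&map_lock);
	return true;
}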
View file

@ -652,7 +652,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
*/
val = ((pmcr & ~ARMV8_PMU_PMCR_MASK)
| (ARMV8_PMU_PMCR_MASK & 0xdecafbad)) & (~ARMV8_PMU_PMCR_E);
if (!system_supports_32bit_el0())
if (!kvm_supports_32bit_el0())
val |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, r->reg) = val;
}
@ -701,7 +701,7 @@ static bool access_pmcr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
val = __vcpu_sys_reg(vcpu, PMCR_EL0);
val &= ~ARMV8_PMU_PMCR_MASK;
val |= p->regval & ARMV8_PMU_PMCR_MASK;
if (!system_supports_32bit_el0())
if (!kvm_supports_32bit_el0())
val |= ARMV8_PMU_PMCR_LC;
__vcpu_sys_reg(vcpu, PMCR_EL0) = val;
kvm_pmu_handle_pmcr(vcpu, val);

View file

@ -642,24 +642,6 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
vm_area_add_early(vma);
}
static int __init parse_rodata(char *arg)
{
int ret = strtobool(arg, &rodata_enabled);
if (!ret) {
rodata_full = false;
return 0;
}
/* permit 'full' in addition to boolean options */
if (strcmp(arg, "full"))
return -EINVAL;
rodata_enabled = true;
rodata_full = true;
return 0;
}
early_param("rodata", parse_rodata);
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
static int __init map_entry_trampoline(void)
{

View file

@ -67,6 +67,7 @@ WORKAROUND_1902691
WORKAROUND_2038923
WORKAROUND_2064142
WORKAROUND_2077057
WORKAROUND_2457168
WORKAROUND_TRBE_OVERWRITE_FILL_MODE
WORKAROUND_TSB_FLUSH_FAILURE
WORKAROUND_TRBE_WRITE_OUT_OF_RANGE

View file

@ -179,6 +179,21 @@ arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
return retval;
}
static __always_inline bool
arch_test_bit_acquire(unsigned long nr, const volatile unsigned long *addr)
{
int retval;
asm volatile(
"{P0 = tstbit(%1,%2); if (P0.new) %0 = #1; if (!P0.new) %0 = #0;}\n"
: "=&r" (retval)
: "r" (addr[BIT_WORD(nr)]), "r" (nr % BITS_PER_LONG)
: "p0", "memory"
);
return retval;
}
/*
* ffz - find first zero in word.
* @word: The word to search

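The new arch_test_bit_acquire() above gives the plain bit test acquire ordering, so reads that follow it cannot be hoisted before the bit is observed set. The other architectures in this series simply point arch_test_bit_acquire at the generic helper; a rough C11 rendering of what such an acquire-ordered bit test amounts to (a sketch, not the kernel's generic_test_bit_acquire()):

#include <stdatomic.h>
#include <stdbool.h>
#include <limits.h>

#define BITS_PER_LONG_SKETCH (sizeof(unsigned long) * CHAR_BIT)

static inline bool test_bit_acquire_sketch(unsigned long nr,
					   _Atomic unsigned long *addr)
{
	_Atomic unsigned long *word = addr + nr / BITS_PER_LONG_SKETCH;

	/* Acquire-load the containing word, then extract the bit. */
	return (atomic_load_explicit(word, memory_order_acquire)
		>> (nr % BITS_PER_LONG_SKETCH)) & 1UL;
}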
View file

@ -331,11 +331,8 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
return (old & bit) != 0;
}
static __always_inline bool
arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
{
return 1 & (((const volatile __u32 *) addr)[nr >> 5] >> (nr & 31));
}
#define arch_test_bit generic_test_bit
#define arch_test_bit_acquire generic_test_bit_acquire
/**
* ffz - find the first zero bit in a long word

View file

@ -39,6 +39,7 @@ config LOONGARCH
select ARCH_INLINE_SPIN_UNLOCK_BH if !PREEMPTION
select ARCH_INLINE_SPIN_UNLOCK_IRQ if !PREEMPTION
select ARCH_INLINE_SPIN_UNLOCK_IRQRESTORE if !PREEMPTION
select ARCH_KEEP_MEMBLOCK
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ARCH_SPARSEMEM_ENABLE
@ -51,6 +52,7 @@ config LOONGARCH
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
select ARCH_WANT_LD_ORPHAN_WARN
select ARCH_WANTS_NO_INSTR
select BUILDTIME_TABLE_SORT
select COMMON_CLK
@ -111,6 +113,7 @@ config LOONGARCH
select PCI_ECAM if ACPI
select PCI_LOONGSON
select PCI_MSI_ARCH_FALLBACKS
select PCI_QUIRKS
select PERF_USE_VMALLOC
select RTC_LIB
select SMP

View file

@ -15,7 +15,7 @@ extern int acpi_pci_disabled;
extern int acpi_noirq;
#define acpi_os_ioremap acpi_os_ioremap
void __init __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size);
static inline void disable_acpi(void)
{

View file

@ -109,4 +109,20 @@ extern unsigned long vm_map_base;
*/
#define PHYSADDR(a) ((_ACAST64_(a)) & TO_PHYS_MASK)
/*
* On LoongArch, the I/O port mapping is as follows:
*
* | .... |
* |-----------------------|
* | pci io ports(16K~32M) |
* |-----------------------|
* | isa io ports(0 ~16K) |
* PCI_IOBASE ->|-----------------------|
* | .... |
*/
#define PCI_IOBASE ((void __iomem *)(vm_map_base + (2 * PAGE_SIZE)))
#define PCI_IOSIZE SZ_32M
#define ISA_IOSIZE SZ_16K
#define IO_SPACE_LIMIT (PCI_IOSIZE - 1)
#endif /* _ASM_ADDRSPACE_H */
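
With the I/O window constants moved into addrspace.h, legacy port numbers on this layout are effectively offsets from PCI_IOBASE, bounded by IO_SPACE_LIMIT. A simplified sketch of how such a window is typically consumed, loosely modelled on the generic asm-generic/io.h approach; the *_SKETCH names and the helper are placeholders, not LoongArch code:

#include <stdint.h>
#include <stddef.h>

#define PCI_IOSIZE_SKETCH     (32u << 20)               /* SZ_32M */
#define IO_SPACE_LIMIT_SKETCH (PCI_IOSIZE_SKETCH - 1)

void *ioport_map_sketch(uint8_t *pci_iobase, unsigned long port)
{
	if (port > IO_SPACE_LIMIT_SKETCH)
		return NULL;            /* outside the 32M I/O window */
	return pci_iobase + port;       /* ISA ports live in the low 16K */
}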

View file

@ -5,8 +5,9 @@
#ifndef __ASM_CMPXCHG_H
#define __ASM_CMPXCHG_H
#include <asm/barrier.h>
#include <linux/bits.h>
#include <linux/build_bug.h>
#include <asm/barrier.h>
#define __xchg_asm(amswap_db, m, val) \
({ \
@ -21,10 +22,53 @@
__ret; \
})
static inline unsigned int __xchg_small(volatile void *ptr, unsigned int val,
unsigned int size)
{
unsigned int shift;
u32 old32, mask, temp;
volatile u32 *ptr32;
/* Mask value to the correct size. */
mask = GENMASK((size * BITS_PER_BYTE) - 1, 0);
val &= mask;
/*
* Calculate a shift & mask that correspond to the value we wish to
* exchange within the naturally aligned 4 byte integer that includes
* it.
*/
shift = (unsigned long)ptr & 0x3;
shift *= BITS_PER_BYTE;
mask <<= shift;
/*
* Calculate a pointer to the naturally aligned 4 byte integer that
* includes our byte of interest, and load its value.
*/
ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3);
asm volatile (
"1: ll.w %0, %3 \n"
" andn %1, %0, %z4 \n"
" or %1, %1, %z5 \n"
" sc.w %1, %2 \n"
" beqz %1, 1b \n"
: "=&r" (old32), "=&r" (temp), "=ZC" (*ptr32)
: "ZC" (*ptr32), "Jr" (mask), "Jr" (val << shift)
: "memory");
return (old32 & mask) >> shift;
}
static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
int size)
{
switch (size) {
case 1:
case 2:
return __xchg_small(ptr, x, size);
case 4:
return __xchg_asm("amswap_db.w", (volatile u32 *)ptr, (u32)x);
@ -67,10 +111,62 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
__ret; \
})
static inline unsigned int __cmpxchg_small(volatile void *ptr, unsigned int old,
unsigned int new, unsigned int size)
{
unsigned int shift;
u32 old32, mask, temp;
volatile u32 *ptr32;
/* Mask inputs to the correct size. */
mask = GENMASK((size * BITS_PER_BYTE) - 1, 0);
old &= mask;
new &= mask;
/*
* Calculate a shift & mask that correspond to the value we wish to
* compare & exchange within the naturally aligned 4 byte integer
* that includes it.
*/
shift = (unsigned long)ptr & 0x3;
shift *= BITS_PER_BYTE;
old <<= shift;
new <<= shift;
mask <<= shift;
/*
* Calculate a pointer to the naturally aligned 4 byte integer that
* includes our byte of interest, and load its value.
*/
ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3);
asm volatile (
"1: ll.w %0, %3 \n"
" and %1, %0, %z4 \n"
" bne %1, %z5, 2f \n"
" andn %1, %0, %z4 \n"
" or %1, %1, %z6 \n"
" sc.w %1, %2 \n"
" beqz %1, 1b \n"
" b 3f \n"
"2: \n"
__WEAK_LLSC_MB
"3: \n"
: "=&r" (old32), "=&r" (temp), "=ZC" (*ptr32)
: "ZC" (*ptr32), "Jr" (mask), "Jr" (old), "Jr" (new)
: "memory");
return (old32 & mask) >> shift;
}
static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
unsigned long new, unsigned int size)
{
switch (size) {
case 1:
case 2:
return __cmpxchg_small(ptr, old, new, size);
case 4:
return __cmpxchg_asm("ll.w", "sc.w", (volatile u32 *)ptr,
(u32)old, new);
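
__cmpxchg_small() above emulates a 1- or 2-byte compare-and-exchange by operating on the naturally aligned 32-bit word that contains the value: derive a shift from the low address bits, mask old/new into position, and loop on a word-sized LL/SC. The same arithmetic, sketched portably with a GCC __atomic builtin in place of LoongArch ll.w/sc.w (an illustration assuming a naturally aligned 1- or 2-byte target, not the kernel implementation):

#include <stdint.h>

static inline unsigned int cmpxchg_small_sketch(void *ptr, unsigned int old,
						unsigned int new_val,
						unsigned int size /* 1 or 2 */)
{
	unsigned int shift = ((uintptr_t)ptr & 0x3) * 8;
	uint32_t lowmask = (1u << (size * 8)) - 1;
	uint32_t mask = lowmask << shift;
	/* The aligned 32-bit word that contains the byte/halfword. */
	uint32_t *ptr32 = (uint32_t *)((uintptr_t)ptr & ~(uintptr_t)0x3);
	uint32_t expected = __atomic_load_n(ptr32, __ATOMIC_RELAXED);

	for (;;) {
		/* Field no longer holds the expected old value: bail out,
		 * mirroring the "bne ... 2f" early exit above. */
		if (((expected >> shift) & lowmask) != (old & lowmask))
			return (expected >> shift) & lowmask;

		uint32_t desired = (expected & ~mask) |
				   ((new_val & lowmask) << shift);

		if (__atomic_compare_exchange_n(ptr32, &expected, desired, 0,
						__ATOMIC_SEQ_CST,
						__ATOMIC_SEQ_CST))
			return old & lowmask;
		/* Lost the race: expected now holds the fresh word value. */
	}
}

__xchg_small() above is the same construction without the compare step.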

View file

@ -7,34 +7,15 @@
#define ARCH_HAS_IOREMAP_WC
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <asm/addrspace.h>
#include <asm/bug.h>
#include <asm/byteorder.h>
#include <asm/cpu.h>
#include <asm/page.h>
#include <asm/pgtable-bits.h>
#include <asm/string.h>
/*
* On LoongArch, the I/O port mapping is as follows:
*
* | .... |
* |-----------------------|
* | pci io ports(64K~32M) |
* |-----------------------|
* | isa io ports(0 ~16K) |
* PCI_IOBASE ->|-----------------------|
* | .... |
*/
#define PCI_IOBASE ((void __iomem *)(vm_map_base + (2 * PAGE_SIZE)))
#define PCI_IOSIZE SZ_32M
#define ISA_IOSIZE SZ_16K
#define IO_SPACE_LIMIT (PCI_IOSIZE - 1)
/*
* Change "struct page" to physical address.
*/

View file

@ -81,7 +81,6 @@ extern struct acpi_vector_group msi_group[MAX_IO_PICS];
#define GSI_MIN_PCH_IRQ LOONGSON_PCH_IRQ_BASE
#define GSI_MAX_PCH_IRQ (LOONGSON_PCH_IRQ_BASE + 256 - 1)
extern int find_pch_pic(u32 gsi);
struct acpi_madt_lio_pic;
struct acpi_madt_eio_pic;
struct acpi_madt_ht_pic;

View file

@ -95,7 +95,7 @@ static inline int pfn_valid(unsigned long pfn)
#endif
#define virt_to_pfn(kaddr) PFN_DOWN(virt_to_phys((void *)(kaddr)))
#define virt_to_pfn(kaddr) PFN_DOWN(PHYSADDR(kaddr))
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
extern int __virt_addr_valid(volatile void *kaddr);

View file

@ -123,6 +123,10 @@ static inline unsigned long __percpu_xchg(void *ptr, unsigned long val,
int size)
{
switch (size) {
case 1:
case 2:
return __xchg_small((volatile void *)ptr, val, size);
case 4:
return __xchg_asm("amswap.w", (volatile u32 *)ptr, (u32)val);
@ -204,9 +208,13 @@ do { \
#define this_cpu_write_4(pcp, val) _percpu_write(pcp, val)
#define this_cpu_write_8(pcp, val) _percpu_write(pcp, val)
#define this_cpu_xchg_1(pcp, val) _percpu_xchg(pcp, val)
#define this_cpu_xchg_2(pcp, val) _percpu_xchg(pcp, val)
#define this_cpu_xchg_4(pcp, val) _percpu_xchg(pcp, val)
#define this_cpu_xchg_8(pcp, val) _percpu_xchg(pcp, val)
#define this_cpu_cmpxchg_1(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
#define this_cpu_cmpxchg_2(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
#define this_cpu_cmpxchg_4(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)
#define this_cpu_cmpxchg_8(ptr, o, n) _protect_cmpxchg_local(ptr, o, n)

View file

@ -59,7 +59,6 @@
#include <linux/mm_types.h>
#include <linux/mmzone.h>
#include <asm/fixmap.h>
#include <asm/io.h>
struct mm_struct;
struct vm_area_struct;
@ -145,7 +144,7 @@ static inline void set_p4d(p4d_t *p4d, p4d_t p4dval)
*p4d = p4dval;
}
#define p4d_phys(p4d) virt_to_phys((void *)p4d_val(p4d))
#define p4d_phys(p4d) PHYSADDR(p4d_val(p4d))
#define p4d_page(p4d) (pfn_to_page(p4d_phys(p4d) >> PAGE_SHIFT))
#endif
@ -188,7 +187,7 @@ static inline pmd_t *pud_pgtable(pud_t pud)
#define set_pud(pudptr, pudval) do { *(pudptr) = (pudval); } while (0)
#define pud_phys(pud) virt_to_phys((void *)pud_val(pud))
#define pud_phys(pud) PHYSADDR(pud_val(pud))
#define pud_page(pud) (pfn_to_page(pud_phys(pud) >> PAGE_SHIFT))
#endif
@ -221,7 +220,7 @@ static inline void pmd_clear(pmd_t *pmdp)
#define set_pmd(pmdptr, pmdval) do { *(pmdptr) = (pmdval); } while (0)
#define pmd_phys(pmd) virt_to_phys((void *)pmd_val(pmd))
#define pmd_phys(pmd) PHYSADDR(pmd_val(pmd))
#ifndef CONFIG_TRANSPARENT_HUGEPAGE
#define pmd_page(pmd) (pfn_to_page(pmd_phys(pmd) >> PAGE_SHIFT))

View file

@ -1,10 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#ifndef _ASM_REBOOT_H
#define _ASM_REBOOT_H
extern void (*pm_restart)(void);
#endif /* _ASM_REBOOT_H */

View file

@ -48,7 +48,7 @@ void __init __acpi_unmap_table(void __iomem *map, unsigned long size)
early_memunmap(map, size);
}
void __init __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
void __iomem *acpi_os_ioremap(acpi_physical_address phys, acpi_size size)
{
if (!memblock_is_memory(phys))
return ioremap(phys, size);

View file

@ -15,10 +15,16 @@
#include <acpi/reboot.h>
#include <asm/idle.h>
#include <asm/loongarch.h>
#include <asm/reboot.h>
static void default_halt(void)
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
void machine_halt(void)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
local_irq_disable();
clear_csr_ecfg(ECFG0_IM);
@ -30,18 +36,29 @@ static void default_halt(void)
}
}
static void default_poweroff(void)
void machine_power_off(void)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
do_kernel_power_off();
#ifdef CONFIG_EFI
efi.reset_system(EFI_RESET_SHUTDOWN, EFI_SUCCESS, 0, NULL);
#endif
while (true) {
__arch_cpu_idle();
}
}
static void default_restart(void)
void machine_restart(char *command)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
do_kernel_restart(command);
#ifdef CONFIG_EFI
if (efi_capsule_pending(NULL))
efi_reboot(REBOOT_WARM, NULL);
@ -55,47 +72,3 @@ static void default_restart(void)
__arch_cpu_idle();
}
}
void (*pm_restart)(void);
EXPORT_SYMBOL(pm_restart);
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
void machine_halt(void)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
default_halt();
}
void machine_power_off(void)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
pm_power_off();
}
void machine_restart(char *command)
{
#ifdef CONFIG_SMP
preempt_disable();
smp_send_stop();
#endif
do_kernel_restart(command);
pm_restart();
}
static int __init loongarch_reboot_setup(void)
{
pm_restart = default_restart;
pm_power_off = default_poweroff;
return 0;
}
arch_initcall(loongarch_reboot_setup);

View file

@ -529,11 +529,11 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
signal_setup_done(ret, ksig, 0);
}
void arch_do_signal_or_restart(struct pt_regs *regs, bool has_signal)
void arch_do_signal_or_restart(struct pt_regs *regs)
{
struct ksignal ksig;
if (has_signal && get_signal(&ksig)) {
if (get_signal(&ksig)) {
/* Whee! Actually deliver the signal. */
handle_signal(&ksig, regs);
return;

View file

@ -77,6 +77,8 @@ SECTIONS
PERCPU_SECTION(1 << CONFIG_L1_CACHE_SHIFT)
#endif
.rela.dyn : ALIGN(8) { *(.rela.dyn) *(.rela*) }
.init.bss : {
*(.init.bss)
}

View file

@ -18,11 +18,11 @@ void dump_tlb_regs(void)
{
const int field = 2 * sizeof(unsigned long);
pr_info("Index : %0x\n", read_csr_tlbidx());
pr_info("PageSize : %0x\n", read_csr_pagesize());
pr_info("EntryHi : %0*llx\n", field, read_csr_entryhi());
pr_info("EntryLo0 : %0*llx\n", field, read_csr_entrylo0());
pr_info("EntryLo1 : %0*llx\n", field, read_csr_entrylo1());
pr_info("Index : 0x%0x\n", read_csr_tlbidx());
pr_info("PageSize : 0x%0x\n", read_csr_pagesize());
pr_info("EntryHi : 0x%0*llx\n", field, read_csr_entryhi());
pr_info("EntryLo0 : 0x%0*llx\n", field, read_csr_entrylo0());
pr_info("EntryLo1 : 0x%0*llx\n", field, read_csr_entrylo1());
}
static void dump_tlb(int first, int last)
@ -33,8 +33,8 @@ static void dump_tlb(int first, int last)
unsigned int s_index, s_asid;
unsigned int pagesize, c0, c1, i;
unsigned long asidmask = cpu_asid_mask(&current_cpu_data);
int pwidth = 11;
int vwidth = 11;
int pwidth = 16;
int vwidth = 16;
int asidwidth = DIV_ROUND_UP(ilog2(asidmask) + 1, 4);
s_entryhi = read_csr_entryhi();
@ -64,22 +64,22 @@ static void dump_tlb(int first, int last)
/*
* Only print entries in use
*/
pr_info("Index: %2d pgsize=%x ", i, (1 << pagesize));
pr_info("Index: %4d pgsize=0x%x ", i, (1 << pagesize));
c0 = (entrylo0 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
c1 = (entrylo1 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
pr_cont("va=%0*lx asid=%0*lx",
pr_cont("va=0x%0*lx asid=0x%0*lx",
vwidth, (entryhi & ~0x1fffUL), asidwidth, asid & asidmask);
/* NR/NX are in awkward places, so mask them off separately */
pa = entrylo0 & ~(ENTRYLO_NR | ENTRYLO_NX);
pa = pa & PAGE_MASK;
pr_cont("\n\t[");
pr_cont("ri=%d xi=%d ",
pr_cont("nr=%d nx=%d ",
(entrylo0 & ENTRYLO_NR) ? 1 : 0,
(entrylo0 & ENTRYLO_NX) ? 1 : 0);
pr_cont("pa=%0*llx c=%d d=%d v=%d g=%d plv=%lld] [",
pr_cont("pa=0x%0*llx c=%d d=%d v=%d g=%d plv=%lld] [",
pwidth, pa, c0,
(entrylo0 & ENTRYLO_D) ? 1 : 0,
(entrylo0 & ENTRYLO_V) ? 1 : 0,
@ -88,10 +88,10 @@ static void dump_tlb(int first, int last)
/* NR/NX are in awkward places, so mask them off separately */
pa = entrylo1 & ~(ENTRYLO_NR | ENTRYLO_NX);
pa = pa & PAGE_MASK;
pr_cont("ri=%d xi=%d ",
pr_cont("nr=%d nx=%d ",
(entrylo1 & ENTRYLO_NR) ? 1 : 0,
(entrylo1 & ENTRYLO_NX) ? 1 : 0);
pr_cont("pa=%0*llx c=%d d=%d v=%d g=%d plv=%lld]\n",
pr_cont("pa=0x%0*llx c=%d d=%d v=%d g=%d plv=%lld]\n",
pwidth, pa, c1,
(entrylo1 & ENTRYLO_D) ? 1 : 0,
(entrylo1 & ENTRYLO_V) ? 1 : 0,

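The format-string changes above add an explicit "0x" prefix because "%0x" on its own only sets the zero-padding flag (a no-op without a field width); it never prints a radix prefix. A quick user-space illustration:

#include <stdio.h>

int main(void)
{
	unsigned int idx = 0x1f;

	printf("Index : %0x\n", idx);    /* prints "1f"   - easy to misread */
	printf("Index : 0x%0x\n", idx);  /* prints "0x1f" - unambiguous     */
	return 0;
}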
View file

@ -216,6 +216,10 @@ good_area:
return;
}
/* The fault is fully completed (including releasing mmap lock) */
if (fault & VM_FAULT_COMPLETED)
return;
if (unlikely(fault & VM_FAULT_RETRY)) {
flags |= FAULT_FLAG_TRIED;

View file

@ -131,18 +131,6 @@ int arch_add_memory(int nid, u64 start, u64 size, struct mhp_params *params)
return ret;
}
#ifdef CONFIG_NUMA
int memory_add_physaddr_to_nid(u64 start)
{
int nid;
nid = pa_to_nid(start);
return nid;
}
EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif
#ifdef CONFIG_MEMORY_HOTREMOVE
void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
{
unsigned long start_pfn = start >> PAGE_SHIFT;
@ -154,6 +142,13 @@ void arch_remove_memory(u64 start, u64 size, struct vmem_altmap *altmap)
page += vmem_altmap_offset(altmap);
__remove_pages(start_pfn, nr_pages, altmap);
}
#ifdef CONFIG_NUMA
int memory_add_physaddr_to_nid(u64 start)
{
return pa_to_nid(start);
}
EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
#endif
#endif

View file

@ -2,16 +2,9 @@
/*
* Copyright (C) 2020-2022 Loongson Technology Corporation Limited
*/
#include <linux/compiler.h>
#include <linux/elf-randomize.h>
#include <linux/errno.h>
#include <linux/export.h>
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/export.h>
#include <linux/personality.h>
#include <linux/random.h>
#include <linux/sched/signal.h>
#include <linux/sched/mm.h>
unsigned long shm_align_mask = PAGE_SIZE - 1; /* Sane caches */
EXPORT_SYMBOL(shm_align_mask);
@ -120,6 +113,6 @@ int __virt_addr_valid(volatile void *kaddr)
if ((vaddr < PAGE_OFFSET) || (vaddr >= vm_map_base))
return 0;
return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
return pfn_valid(PFN_DOWN(PHYSADDR(kaddr)));
}
EXPORT_SYMBOL_GPL(__virt_addr_valid);

View file

@ -24,6 +24,8 @@ static __always_inline const struct vdso_pcpu_data *get_pcpu_data(void)
return (struct vdso_pcpu_data *)(get_vdso_base() - VDSO_DATA_SIZE);
}
extern
int __vdso_getcpu(unsigned int *cpu, unsigned int *node, struct getcpu_cache *unused);
int __vdso_getcpu(unsigned int *cpu, unsigned int *node, struct getcpu_cache *unused)
{
int cpu_id;

View file

@ -6,20 +6,23 @@
*/
#include <linux/types.h>
int __vdso_clock_gettime(clockid_t clock,
struct __kernel_timespec *ts)
extern
int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts);
int __vdso_clock_gettime(clockid_t clock, struct __kernel_timespec *ts)
{
return __cvdso_clock_gettime(clock, ts);
}
int __vdso_gettimeofday(struct __kernel_old_timeval *tv,
struct timezone *tz)
extern
int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz);
int __vdso_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz)
{
return __cvdso_gettimeofday(tv, tz);
}
int __vdso_clock_getres(clockid_t clock_id,
struct __kernel_timespec *res)
extern
int __vdso_clock_getres(clockid_t clock_id, struct __kernel_timespec *res);
int __vdso_clock_getres(clockid_t clock_id, struct __kernel_timespec *res)
{
return __cvdso_clock_getres(clock_id, res);
}

View file

@ -157,11 +157,8 @@ arch___change_bit(unsigned long nr, volatile unsigned long *addr)
change_bit(nr, addr);
}
static __always_inline bool
arch_test_bit(unsigned long nr, const volatile unsigned long *addr)
{
return (addr[nr >> 5] & (1UL << (nr & 31))) != 0;
}
#define arch_test_bit generic_test_bit
#define arch_test_bit_acquire generic_test_bit_acquire
static inline int bset_reg_test_and_set_bit(int nr,
volatile unsigned long *vaddr)

View file

@ -84,8 +84,6 @@
#define KVM_MAX_VCPUS 16
/* memory slots that does not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 0
#define KVM_HALT_POLL_NS_DEFAULT 500000

View file

@ -615,17 +615,17 @@ retry:
* Used to check for invalidations in progress, of the pfn that is
* returned by pfn_to_pfn_prot below.
*/
mmu_seq = kvm->mmu_notifier_seq;
mmu_seq = kvm->mmu_invalidate_seq;
/*
* Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
* gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
* Ensure the read of mmu_invalidate_seq isn't reordered with PTE reads
* in gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
* risk the page we get a reference to getting unmapped before we have a
* chance to grab the mmu_lock without mmu_notifier_retry() noticing.
* chance to grab the mmu_lock without mmu_invalidate_retry() noticing.
*
* This smp_rmb() pairs with the effective smp_wmb() of the combination
* of the pte_unmap_unlock() after the PTE is zapped, and the
* spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
* mmu_notifier_seq is incremented.
* mmu_invalidate_seq is incremented.
*/
smp_rmb();
@ -638,7 +638,7 @@ retry:
spin_lock(&kvm->mmu_lock);
/* Check if an invalidation has taken place since we got pfn */
if (mmu_notifier_retry(kvm, mmu_seq)) {
if (mmu_invalidate_retry(kvm, mmu_seq)) {
/*
* This can happen when mappings are changed asynchronously, but
* also synchronously if a COW is triggered by

View file

@ -50,7 +50,8 @@
stw r13, PT_R13(sp)
stw r14, PT_R14(sp)
stw r15, PT_R15(sp)
stw r2, PT_ORIG_R2(sp)
movi r24, -1
stw r24, PT_ORIG_R2(sp)
stw r7, PT_ORIG_R7(sp)
stw ra, PT_RA(sp)

View file

@ -74,6 +74,8 @@ extern void show_regs(struct pt_regs *);
((struct pt_regs *)((unsigned long)current_thread_info() + THREAD_SIZE)\
- 1)
#define force_successful_syscall_return() (current_pt_regs()->orig_r2 = -1)
int do_syscall_trace_enter(void);
void do_syscall_trace_exit(void);
#endif /* __ASSEMBLY__ */

View file

@ -185,6 +185,7 @@ ENTRY(handle_system_call)
ldw r5, PT_R5(sp)
local_restart:
stw r2, PT_ORIG_R2(sp)
/* Check that the requested system call is within limits */
movui r1, __NR_syscalls
bgeu r2, r1, ret_invsyscall
@ -192,7 +193,6 @@ local_restart:
movhi r11, %hiadj(sys_call_table)
add r1, r1, r11
ldw r1, %lo(sys_call_table)(r1)
beq r1, r0, ret_invsyscall
/* Check if we are being traced */
GET_THREAD_INFO r11
@ -213,6 +213,9 @@ local_restart:
translate_rc_and_ret:
movi r1, 0
bge r2, zero, 3f
ldw r1, PT_ORIG_R2(sp)
addi r1, r1, 1
beq r1, zero, 3f
sub r2, zero, r2
movi r1, 1
3:
@ -255,9 +258,9 @@ traced_system_call:
ldw r6, PT_R6(sp)
ldw r7, PT_R7(sp)
/* Fetch the syscall function, we don't need to check the boundaries
* since this is already done.
*/
/* Fetch the syscall function. */
movui r1, __NR_syscalls
bgeu r2, r1, traced_invsyscall
slli r1, r2, 2
movhi r11,%hiadj(sys_call_table)
add r1, r1, r11
@ -276,6 +279,9 @@ traced_system_call:
translate_rc_and_ret2:
movi r1, 0
bge r2, zero, 4f
ldw r1, PT_ORIG_R2(sp)
addi r1, r1, 1
beq r1, zero, 4f
sub r2, zero, r2
movi r1, 1
4:
@ -287,6 +293,11 @@ end_translate_rc_and_ret2:
RESTORE_SWITCH_STACK
br ret_from_exception
/* If the syscall number was invalid return ENOSYS */
traced_invsyscall:
movi r2, -ENOSYS
br translate_rc_and_ret2
Luser_return:
GET_THREAD_INFO r11 /* get thread_info pointer */
ldw r10, TI_FLAGS(r11) /* get thread_info->flags */
@ -336,9 +347,6 @@ external_interrupt:
/* skip if no interrupt is pending */
beq r12, r0, ret_from_interrupt
movi r24, -1
stw r24, PT_ORIG_R2(sp)
/*
* Process an external hardware interrupt.
*/
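
The translate_rc_and_ret changes above make the errno translation conditional on orig_r2: a negative result is only folded into a positive errno with the error flag set when orig_r2 is not -1, so force_successful_syscall_return() (which writes -1) lets raw negative values through untouched. A C rendering of that decision, with illustrative names rather than the real register layout:

struct sysret_sketch {
	long value;    /* value handed back to userspace */
	int  is_error; /* error flag reported alongside it */
};

struct sysret_sketch translate_rc_sketch(long rc, long orig_r2)
{
	struct sysret_sketch r = { rc, 0 };

	/* Negative results normally become (positive errno, error flag),
	 * but orig_r2 == -1 marks "do not translate" and the raw value
	 * passes through unchanged. */
	if (rc < 0 && orig_r2 != -1) {
		r.value = -rc;
		r.is_error = 1;
	}
	return r;
}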

View file

@ -242,7 +242,7 @@ static int do_signal(struct pt_regs *regs)
/*
* If we were from a system call, check for system call restarting...
*/
if (regs->orig_r2 >= 0) {
if (regs->orig_r2 >= 0 && regs->r1) {
continue_addr = regs->ea;
restart_addr = continue_addr - 4;
retval = regs->r2;
@ -264,6 +264,7 @@ static int do_signal(struct pt_regs *regs)
regs->ea = restart_addr;
break;
}
regs->orig_r2 = -1;
}
if (get_signal(&ksig)) {

View file

@ -13,5 +13,6 @@
#define __SYSCALL(nr, call) [nr] = (call),
void *sys_call_table[__NR_syscalls] = {
[0 ... __NR_syscalls-1] = sys_ni_syscall,
#include <asm/unistd.h>
};
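
The table above relies on GCC's range-designator extension: every slot is first filled with sys_ni_syscall, and the designated entries pulled in from asm/unistd.h then override individual slots, because the last initializer for an element wins. A small stand-alone illustration with a hypothetical table:

#include <stdio.h>

static int default_handler(void) { return -1; }
static int real_handler(void)    { return 42; }

#define NR_ENTRIES 8

static int (*table[NR_ENTRIES])(void) = {
	[0 ... NR_ENTRIES - 1] = default_handler,  /* fill everything   */
	[3] = real_handler,                        /* then override one */
};

int main(void)
{
	printf("%d %d\n", table[0](), table[3]()); /* prints "-1 42" */
	return 0;
}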

View file

@ -146,10 +146,10 @@ menu "Processor type and features"
choice
prompt "Processor type"
default PA7000
default PA7000 if "$(ARCH)" = "parisc"
config PA7000
bool "PA7000/PA7100"
bool "PA7000/PA7100" if "$(ARCH)" = "parisc"
help
This is the processor type of your CPU. This information is
used for optimizing purposes. In order to compile a kernel
@ -160,21 +160,21 @@ config PA7000
which is required on some machines.
config PA7100LC
bool "PA7100LC"
bool "PA7100LC" if "$(ARCH)" = "parisc"
help
Select this option for the PCX-L processor, as used in the
712, 715/64, 715/80, 715/100, 715/100XC, 725/100, 743, 748,
D200, D210, D300, D310 and E-class
config PA7200
bool "PA7200"
bool "PA7200" if "$(ARCH)" = "parisc"
help
Select this option for the PCX-T' processor, as used in the
C100, C110, J100, J110, J210XC, D250, D260, D350, D360,
K100, K200, K210, K220, K400, K410 and K420
config PA7300LC
bool "PA7300LC"
bool "PA7300LC" if "$(ARCH)" = "parisc"
help
Select this option for the PCX-L2 processor, as used in the
744, A180, B132L, B160L, B180L, C132L, C160L, C180L,
@ -224,17 +224,8 @@ config MLONGCALLS
Enabling this option will probably slow down your kernel.
config 64BIT
bool "64-bit kernel"
def_bool "$(ARCH)" = "parisc64"
depends on PA8X00
help
Enable this if you want to support 64bit kernel on PA-RISC platform.
At the moment, only people willing to use more than 2GB of RAM,
or having a 64bit-only capable PA-RISC machine should say Y here.
Since there is no 64bit userland on PA-RISC, there is no point to
enable this option otherwise. The 64bit kernel is significantly bigger
and slower than the 32bit one.
choice
prompt "Kernel page size"

View file

@ -12,14 +12,6 @@
#include <asm/barrier.h>
#include <linux/atomic.h>
/* compiler build environment sanity checks: */
#if !defined(CONFIG_64BIT) && defined(__LP64__)
#error "Please use 'ARCH=parisc' to build the 32-bit kernel."
#endif
#if defined(CONFIG_64BIT) && !defined(__LP64__)
#error "Please use 'ARCH=parisc64' to build the 64-bit kernel."
#endif
/* See http://marc.theaimsgroup.com/?t=108826637900003 for discussion
* on use of volatile and __*_bit() (set/clear/change):
* *_bit() want use of volatile.

View file

@ -22,7 +22,7 @@
#include <linux/init.h>
#include <linux/pgtable.h>
.level PA_ASM_LEVEL
.level 1.1
__INITDATA
ENTRY(boot_args)
@ -70,6 +70,47 @@ $bss_loop:
stw,ma %arg2,4(%r1)
stw,ma %arg3,4(%r1)
#if !defined(CONFIG_64BIT) && defined(CONFIG_PA20)
/* This 32-bit kernel was compiled for PA2.0 CPUs. Check current CPU
* and halt kernel if we detect a PA1.x CPU. */
ldi 32,%r10
mtctl %r10,%cr11
.level 2.0
mfctl,w %cr11,%r10
.level 1.1
comib,<>,n 0,%r10,$cpu_ok
load32 PA(msg1),%arg0
ldi msg1_end-msg1,%arg1
$iodc_panic:
copy %arg0, %r10
copy %arg1, %r11
load32 PA(init_stack),%sp
#define MEM_CONS 0x3A0
ldw MEM_CONS+32(%r0),%arg0 // HPA
ldi ENTRY_IO_COUT,%arg1
ldw MEM_CONS+36(%r0),%arg2 // SPA
ldw MEM_CONS+8(%r0),%arg3 // layers
load32 PA(__bss_start),%r1
stw %r1,-52(%sp) // arg4
stw %r0,-56(%sp) // arg5
stw %r10,-60(%sp) // arg6 = ptr to text
stw %r11,-64(%sp) // arg7 = len
stw %r0,-68(%sp) // arg8
load32 PA(.iodc_panic_ret), %rp
ldw MEM_CONS+40(%r0),%r1 // ENTRY_IODC
bv,n (%r1)
.iodc_panic_ret:
b . /* wait endless with ... */
or %r10,%r10,%r10 /* qemu idle sleep */
msg1: .ascii "Can't boot kernel which was built for PA8x00 CPUs on this machine.\r\n"
msg1_end:
$cpu_ok:
#endif
.level PA_ASM_LEVEL
/* Initialize startup VM. Just map first 16/32 MB of memory */
load32 PA(swapper_pg_dir),%r4
mtctl %r4,%cr24 /* Initialize kernel root pointer */

Some files were not shown because too many files changed in this diff.