Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski 2022-07-21 13:03:39 -07:00
Parents: 5588d62802 7ca433dc6d
Commit: 6e0e846ee2
267 changed files: 2630 additions and 1722 deletions


@ -627,6 +627,10 @@ S: 48287 Sawleaf
S: Fremont, California 94539
S: USA
N: Tomas Cech
E: sleep_walker@suse.com
D: arm/palm treo support
N: Florent Chabaud
E: florent.chabaud@polytechnique.org
D: software suspend


@ -94,6 +94,7 @@ if:
- allwinner,sun8i-a83t-display-engine
- allwinner,sun8i-r40-display-engine
- allwinner,sun9i-a80-display-engine
- allwinner,sun20i-d1-display-engine
- allwinner,sun50i-a64-display-engine
then:


@ -301,7 +301,7 @@ through which it can issue requests and negotiate::
void (*issue_read)(struct netfs_io_subrequest *subreq);
bool (*is_still_valid)(struct netfs_io_request *rreq);
int (*check_write_begin)(struct file *file, loff_t pos, unsigned len,
struct folio *folio, void **_fsdata);
struct folio **foliop, void **_fsdata);
void (*done)(struct netfs_io_request *rreq);
};
@ -381,8 +381,10 @@ The operations are as follows:
allocated/grabbed the folio to be modified to allow the filesystem to flush
conflicting state before allowing it to be modified.
It should return 0 if everything is now fine, -EAGAIN if the folio should be
regrabbed and any other error code to abort the operation.
It may unlock and discard the folio it was given and set the caller's folio
pointer to NULL. It should return 0 if everything is now fine (``*foliop``
left set) or the op should be retried (``*foliop`` cleared) and any other
error code to abort the operation.
* ``done``


@ -503,26 +503,108 @@ per-port PHY specific details: interface connection, MDIO bus location, etc.
Driver development
==================
DSA switch drivers need to implement a dsa_switch_ops structure which will
DSA switch drivers need to implement a ``dsa_switch_ops`` structure which will
contain the various members described below.
``register_switch_driver()`` registers this dsa_switch_ops in its internal list
of drivers to probe for. ``unregister_switch_driver()`` does the exact opposite.
Probing, registration and device lifetime
-----------------------------------------
Unless requested differently by setting the priv_size member accordingly, DSA
does not allocate any driver private context space.
DSA switches are regular ``device`` structures on buses (be they platform, SPI,
I2C, MDIO or otherwise). The DSA framework is not involved in their probing
with the device core.
Switch registration from the perspective of a driver means passing a valid
``struct dsa_switch`` pointer to ``dsa_register_switch()``, usually from the
switch driver's probing function. The following members must be valid in the
provided structure:
- ``ds->dev``: will be used to parse the switch's OF node or platform data.
- ``ds->num_ports``: will be used to create the port list for this switch, and
to validate the port indices provided in the OF node.
- ``ds->ops``: a pointer to the ``dsa_switch_ops`` structure holding the DSA
method implementations.
- ``ds->priv``: backpointer to a driver-private data structure which can be
retrieved in all further DSA method callbacks.
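As a rough sketch (not taken from any in-tree driver; the ``foo_`` names,
``FOO_NUM_PORTS`` and the choice of an MDIO-attached device are assumptions), a
probe function would fill in these members and then register the switch::

  static int foo_probe(struct mdio_device *mdiodev)
  {
          struct device *dev = &mdiodev->dev;
          struct foo_priv *priv;
          struct dsa_switch *ds;

          priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
          if (!priv)
                  return -ENOMEM;

          ds = devm_kzalloc(dev, sizeof(*ds), GFP_KERNEL);
          if (!ds)
                  return -ENOMEM;

          ds->dev = dev;                  /* OF node / platform data source */
          ds->num_ports = FOO_NUM_PORTS;  /* hypothetical port count */
          ds->ops = &foo_switch_ops;      /* dsa_switch_ops, sketched later */
          ds->priv = priv;                /* retrievable from all callbacks */

          dev_set_drvdata(dev, ds);       /* used by remove()/shutdown() below */

          return dsa_register_switch(ds);
  }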
In addition, the following flags in the ``dsa_switch`` structure may optionally
be configured to obtain driver-specific behavior from the DSA core. Their
behavior when set is documented through comments in ``include/net/dsa.h``.
- ``ds->vlan_filtering_is_global``
- ``ds->needs_standalone_vlan_filtering``
- ``ds->configure_vlan_while_not_filtering``
- ``ds->untag_bridge_pvid``
- ``ds->assisted_learning_on_cpu_port``
- ``ds->mtu_enforcement_ingress``
- ``ds->fdb_isolation``
Internally, DSA keeps an array of switch trees (group of switches) global to
the kernel, and attaches a ``dsa_switch`` structure to a tree on registration.
The tree ID to which the switch is attached is determined by the first u32
number of the ``dsa,member`` property of the switch's OF node (0 if missing).
The switch ID within the tree is determined by the second u32 number of the
same OF property (0 if missing). Registering multiple switches with the same
switch ID and tree ID is illegal and will cause an error. Using platform data,
only a single switch and a single switch tree are permitted.
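For example, a switch whose OF node carries ``dsa,member = <0 1>;`` is
registered as switch 1 of tree 0, while a switch whose node omits the property
becomes switch 0 of tree 0.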
In case of a tree with multiple switches, probing takes place asymmetrically.
The first N-1 callers of ``dsa_register_switch()`` only add their ports to the
port list of the tree (``dst->ports``), each port having a backpointer to its
associated switch (``dp->ds``). Then, these switches exit their
``dsa_register_switch()`` call early, because ``dsa_tree_setup_routing_table()``
has determined that the tree is not yet complete (not all ports referenced by
DSA links are present in the tree's port list). The tree becomes complete when
the last switch calls ``dsa_register_switch()``, and this triggers the effective
continuation of initialization (including the call to ``ds->ops->setup()``) for
all switches within that tree, all as part of the calling context of the last
switch's probe function.
The opposite of registration takes place when calling ``dsa_unregister_switch()``,
which removes a switch's ports from the port list of the tree. The entire tree
is torn down when the first switch unregisters.
It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
version of the full teardown performed by ``dsa_unregister_switch()``).
The reason is that DSA keeps a reference on the master net device, and if the
driver for the master device decides to unbind on shutdown, DSA's reference
will block that operation from finalizing.
Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
but not both, and the device driver model permits the bus' ``remove()`` method
to be called even if ``shutdown()`` was already called. Therefore, drivers are
expected to implement a mutual exclusion method between ``remove()`` and
``shutdown()`` by setting their drvdata to NULL after any of these has run, and
checking whether the drvdata is NULL before proceeding to take any action.
After ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` was called, no
further callbacks via the provided ``dsa_switch_ops`` may take place, and the
driver may free the data structures associated with the ``dsa_switch``.
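A minimal sketch of that ``remove()``/``shutdown()`` exclusion, again assuming
a hypothetical MDIO-bound ``foo`` driver whose probe stored the ``dsa_switch``
pointer in drvdata::

  static void foo_remove(struct mdio_device *mdiodev)
  {
          struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);

          if (!ds)        /* shutdown() already ran */
                  return;

          dsa_unregister_switch(ds);
          dev_set_drvdata(&mdiodev->dev, NULL);
  }

  static void foo_shutdown(struct mdio_device *mdiodev)
  {
          struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);

          if (!ds)        /* remove() already ran */
                  return;

          dsa_switch_shutdown(ds);
          dev_set_drvdata(&mdiodev->dev, NULL);
  }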
Switch configuration
--------------------
- ``tag_protocol``: this is to indicate what kind of tagging protocol is supported,
should be a valid value from the ``dsa_tag_protocol`` enum
- ``get_tag_protocol``: this is to indicate what kind of tagging protocol is
supported, should be a valid value from the ``dsa_tag_protocol`` enum.
The returned information does not have to be static; the driver is passed the
CPU port number, as well as the tagging protocol of a possibly stacked
upstream switch, in case there are hardware limitations in terms of supported
tag formats.
- ``probe``: probe routine which will be invoked by the DSA platform device upon
registration to test for the presence/absence of a switch device. For MDIO
devices, it is recommended to issue a read towards internal registers using
the switch pseudo-PHY and return whether this is a supported device. For other
buses, return a non-NULL string
- ``change_tag_protocol``: when the default tagging protocol has compatibility
problems with the master or other issues, the driver may support changing it
at runtime, either through a device tree property or through sysfs. In that
case, further calls to ``get_tag_protocol`` should report the protocol in
current use.
- ``setup``: setup function for the switch, this function is responsible for setting
up the ``dsa_switch_ops`` private structure with all it needs: register maps,
@ -535,7 +617,17 @@ Switch configuration
fully configured and ready to serve any kind of request. It is recommended
to issue a software reset of the switch during this setup function in order to
avoid relying on what a previous software agent such as a bootloader/firmware
may have previously configured.
may have previously configured. The method responsible for undoing any
applicable allocations or operations done here is ``teardown``.
- ``port_setup`` and ``port_teardown``: methods for initialization and
destruction of per-port data structures. It is mandatory for some operations
such as registering and unregistering devlink port regions to be done from
these methods, otherwise they are optional. A port will be torn down only if
it has been previously set up. It is possible for a port to be set up during
probing only to be torn down immediately afterwards, for example in case its
PHY cannot be found. In this case, probing of the DSA switch continues
without that particular port.
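Tying the configuration callbacks above together, the ``foo_switch_ops``
referenced by the earlier probe sketch could look roughly like this (only two
methods shown; everything ``foo_`` is a placeholder, not a real driver)::

  static enum dsa_tag_protocol
  foo_get_tag_protocol(struct dsa_switch *ds, int port,
                       enum dsa_tag_protocol mprot)
  {
          /* Fixed choice for the sketch; a real driver may inspect the
           * CPU port and the upstream tagger (mprot) passed in.
           */
          return DSA_TAG_PROTO_DSA;
  }

  static int foo_setup(struct dsa_switch *ds)
  {
          /* Software-reset the switch here so that no bootloader or
           * firmware state is inherited, then set up register maps,
           * interrupts, etc. Undo all of it in .teardown.
           */
          return 0;
  }

  static const struct dsa_switch_ops foo_switch_ops = {
          .get_tag_protocol = foo_get_tag_protocol,
          .setup            = foo_setup,
  };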
PHY devices and link management
-------------------------------
@ -635,26 +727,198 @@ Power management
``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
disabled while being a bridge member
Address databases
-----------------
Switching hardware is expected to have a table for FDB entries, however not all
of them are active at the same time. An address database is the subset (partition)
of FDB entries that is active (can be matched by address learning on RX, or FDB
lookup on TX) depending on the state of the port. An address database may
occasionally be called "FID" (Filtering ID) in this document, although the
underlying implementation may choose whatever is available to the hardware.
For example, all ports that belong to a VLAN-unaware bridge (which is
*currently* VLAN-unaware) are expected to learn source addresses in the
database associated by the driver with that bridge (and not with other
VLAN-unaware bridges). During forwarding and FDB lookup, a packet received on a
VLAN-unaware bridge port should be able to find a VLAN-unaware FDB entry having
the same MAC DA as the packet, which is present on another port member of the
same bridge. At the same time, the FDB lookup process must be able to not find
an FDB entry having the same MAC DA as the packet, if that entry points towards
a port which is a member of a different VLAN-unaware bridge (and is therefore
associated with a different address database).
Similarly, each VLAN of each offloaded VLAN-aware bridge should have an
associated address database, which is shared by all ports which are members of
that VLAN, but not shared by ports belonging to different bridges that are
members of the same VID.
In this context, a VLAN-unaware database means that all packets are expected to
match on it irrespective of VLAN ID (only MAC address lookup), whereas a
VLAN-aware database means that packets are supposed to match based on the VLAN
ID from the classified 802.1Q header (or the pvid if untagged).
At the bridge layer, VLAN-unaware FDB entries have the special VID value of 0,
whereas VLAN-aware FDB entries have non-zero VID values. Note that a
VLAN-unaware bridge may have VLAN-aware (non-zero VID) FDB entries, and a
VLAN-aware bridge may have VLAN-unaware FDB entries. As in hardware, the
software bridge keeps separate address databases, and offloads to hardware the
FDB entries belonging to these databases, through switchdev, asynchronously
relative to the moment when the databases become active or inactive.
When a user port operates in standalone mode, its driver should configure it to
use a separate database called a port private database. This is different from
the databases described above, and should impede operation as a standalone port
(packet in, packet out to the CPU port) as little as possible. For example,
on ingress, it should not attempt to learn the MAC SA of ingress traffic, since
learning is a bridging layer service and this is a standalone port, therefore
it would consume useless space. With no address learning, the port private
database should be empty in a naive implementation, and in this case, all
received packets should be trivially flooded to the CPU port.
DSA (cascade) and CPU ports are also called "shared" ports because they service
multiple address databases, and the database that a packet should be associated
to is usually embedded in the DSA tag. This means that the CPU port may
simultaneously transport packets coming from a standalone port (which were
classified by hardware in one address database), and from a bridge port (which
were classified to a different address database).
Switch drivers which satisfy certain criteria are able to optimize the naive
configuration by removing the CPU port from the flooding domain of the switch,
and just program the hardware with FDB entries pointing towards the CPU port
for which it is known that software is interested in those MAC addresses.
Packets which do not match a known FDB entry will not be delivered to the CPU,
which will save CPU cycles required for creating an skb just to drop it.
DSA is able to perform host address filtering for the following kinds of
addresses:
- Primary unicast MAC addresses of ports (``dev->dev_addr``). These are
associated with the port private database of the respective user port,
and the driver is notified to install them through ``port_fdb_add`` towards
the CPU port.
- Secondary unicast and multicast MAC addresses of ports (addresses added
through ``dev_uc_add()`` and ``dev_mc_add()``). These are also associated
with the port private database of the respective user port.
- Local/permanent bridge FDB entries (``BR_FDB_LOCAL``). These are the MAC
addresses of the bridge ports, for which packets must be terminated locally
and not forwarded. They are associated with the address database for that
bridge.
- Static bridge FDB entries installed towards foreign (non-DSA) interfaces
present in the same bridge as some DSA switch ports. These are also
associated with the address database for that bridge.
- Dynamically learned FDB entries on foreign interfaces present in the same
bridge as some DSA switch ports, only if ``ds->assisted_learning_on_cpu_port``
is set to true by the driver. These are associated with the address database
for that bridge.
For various operations detailed below, DSA provides a ``dsa_db`` structure
which can be of the following types:
- ``DSA_DB_PORT``: the FDB (or MDB) entry to be installed or deleted belongs to
the port private database of user port ``db->dp``.
- ``DSA_DB_BRIDGE``: the entry belongs to one of the address databases of bridge
``db->bridge``. Separation between the VLAN-unaware database and the per-VID
databases of this bridge is expected to be done by the driver.
- ``DSA_DB_LAG``: the entry belongs to the address database of LAG ``db->lag``.
Note: ``DSA_DB_LAG`` is currently unused and may be removed in the future.
The drivers which act upon the ``dsa_db`` argument in ``port_fdb_add``,
``port_mdb_add`` etc should declare ``ds->fdb_isolation`` as true.
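As an illustration of acting on the ``dsa_db`` argument (the FID helpers are
assumptions, not a real API), a ``port_fdb_add`` implementation in a driver
with FDB isolation could dispatch on the database type roughly as follows::

  static int foo_port_fdb_add(struct dsa_switch *ds, int port,
                              const unsigned char *addr, u16 vid,
                              struct dsa_db db)
  {
          struct foo_priv *priv = ds->priv;
          u16 fid;

          switch (db.type) {
          case DSA_DB_PORT:
                  /* Port private database of standalone user port db.dp */
                  fid = foo_standalone_fid(priv, db.dp->index);
                  break;
          case DSA_DB_BRIDGE:
                  /* Separating the VLAN-unaware and per-VID databases of
                   * this bridge is the driver's responsibility.
                   */
                  fid = foo_bridge_fid(priv, db.bridge.num, vid);
                  break;
          default:
                  return -EOPNOTSUPP;
          }

          return foo_fdb_write(priv, fid, addr, port);
  }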
DSA associates each offloaded bridge and each offloaded LAG with a one-based ID
(``struct dsa_bridge :: num``, ``struct dsa_lag :: id``) for the purposes of
refcounting addresses on shared ports. Drivers may piggyback on DSA's numbering
scheme (the ID is readable through ``db->bridge.num`` and ``db->lag.id``) or may
implement their own.
Only the drivers which declare support for FDB isolation are notified of FDB
entries on the CPU port belonging to ``DSA_DB_PORT`` databases.
For compatibility/legacy reasons, ``DSA_DB_BRIDGE`` addresses are notified to
drivers even if they do not support FDB isolation. However, ``db->bridge.num``
and ``db->lag.id`` are always set to 0 in that case (to denote the lack of
isolation, for refcounting purposes).
Note that it is not mandatory for a switch driver to implement physically
separate address databases for each standalone user port. Since FDB entries in
the port private databases will always point to the CPU port, there is no risk
for incorrect forwarding decisions. In this case, all standalone ports may
share the same database, but the reference counting of host-filtered addresses
(not deleting the FDB entry for a port's MAC address if it's still in use by
another port) becomes the responsibility of the driver, because DSA is unaware
that the port databases are in fact shared. This can be achieved by calling
``dsa_fdb_present_in_other_db()`` and ``dsa_mdb_present_in_other_db()``.
The downside is that the RX filtering lists of each user port are in fact
shared, which means that user port A may accept a packet with a MAC DA it
shouldn't have, only because that MAC address was in the RX filtering list of
user port B. These packets will still be dropped in software, however.
Bridge layer
------------
Offloading the bridge forwarding plane is optional and handled by the methods
below. They may be absent, return -EOPNOTSUPP, or ``ds->max_num_bridges`` may
be non-zero and exceeded, and in this case, joining a bridge port is still
possible, but the packet forwarding will take place in software, and the ports
under a software bridge must remain configured in the same way as for
standalone operation, i.e. have all bridging service functions (address
learning etc) disabled, and send all received packets to the CPU port only.
Concretely, a port starts offloading the forwarding plane of a bridge once it
returns success to the ``port_bridge_join`` method, and stops doing so after
``port_bridge_leave`` has been called. Offloading the bridge means autonomously
learning FDB entries in accordance with the software bridge port's state, and
autonomously forwarding (or flooding) received packets without CPU intervention.
This is optional even when offloading a bridge port. Tagging protocol drivers
are expected to call ``dsa_default_offload_fwd_mark(skb)`` for packets which
have already been autonomously forwarded in the forwarding domain of the
ingress switch port. DSA, through ``dsa_port_devlink_setup()``, considers all
switch ports part of the same tree ID to be part of the same bridge forwarding
domain (capable of autonomous forwarding to each other).
Offloading the TX forwarding process of a bridge is a distinct concept from
simply offloading its forwarding plane, and refers to the ability of certain
driver and tag protocol combinations to transmit a single skb coming from the
bridge device's transmit function to potentially multiple egress ports (and
thereby avoid its cloning in software).
Packets for which the bridge requests this behavior are called data plane
packets and have ``skb->offload_fwd_mark`` set to true in the tag protocol
driver's ``xmit`` function. Data plane packets are subject to FDB lookup,
hardware learning on the CPU port, and do not override the port STP state.
Additionally, replication of data plane packets (multicast, flooding) is
handled in hardware and the bridge driver will transmit a single skb for each
packet that may or may not need replication.
When the TX forwarding offload is enabled, the tag protocol driver is
responsible to inject packets into the data plane of the hardware towards the
correct bridging domain (FID) that the port is a part of. The port may be
VLAN-unaware, and in this case the FID must be equal to the FID used by the
driver for its VLAN-unaware address database associated with that bridge.
Alternatively, the bridge may be VLAN-aware, and in that case, it is guaranteed
that the packet is also VLAN-tagged with the VLAN ID that the bridge processed
this packet in. It is the responsibility of the hardware to untag the VID on
the egress-untagged ports, or keep the tag on the egress-tagged ones.
- ``port_bridge_join``: bridge layer function invoked when a given switch port is
added to a bridge, this function should do what's necessary at the switch
level to permit the joining port to be added to the relevant logical
domain for it to ingress/egress traffic with other members of the bridge.
By setting the ``tx_fwd_offload`` argument to true, the TX forwarding process
of this bridge is also offloaded.
- ``port_bridge_leave``: bridge layer function invoked when a given switch port is
removed from a bridge, this function should do what's necessary at the
switch level to deny the leaving port from ingress/egress traffic from the
remaining bridge members. When the port leaves the bridge, it should be aged
out at the switch hardware for the switch to (re) learn MAC addresses behind
this port.
remaining bridge members.
- ``port_stp_state_set``: bridge layer function invoked when a given switch port STP
state is computed by the bridge layer and should be propagated to switch
hardware to forward/block/learn traffic. The switch driver is responsible for
computing a STP state change based on current and asked parameters and perform
the relevant ageing based on the intersection results
hardware to forward/block/learn traffic.
- ``port_bridge_flags``: bridge layer function invoked when a port must
configure its settings for e.g. flooding of unknown traffic or source address
@ -667,21 +931,11 @@ Bridge layer
CPU port, and flooding towards the CPU port should also be enabled, due to a
lack of an explicit address filtering mechanism in the DSA core.
- ``port_bridge_tx_fwd_offload``: bridge layer function invoked after
``port_bridge_join`` when a driver sets ``ds->num_fwd_offloading_bridges`` to
a non-zero value. Returning success in this function activates the TX
forwarding offload bridge feature for this port, which enables the tagging
protocol driver to inject data plane packets towards the bridging domain that
the port is a part of. Data plane packets are subject to FDB lookup, hardware
learning on the CPU port, and do not override the port STP state.
Additionally, replication of data plane packets (multicast, flooding) is
handled in hardware and the bridge driver will transmit a single skb for each
packet that needs replication. The method is provided as a configuration
point for drivers that need to configure the hardware for enabling this
feature.
- ``port_bridge_tx_fwd_unoffload``: bridge layer function invoked when a driver
leaves a bridge port which had the TX forwarding offload feature enabled.
- ``port_fast_age``: bridge layer function invoked when flushing the
dynamically learned FDB entries on the port is necessary. This is called when
transitioning from an STP state where learning should take place to an STP
state where it shouldn't, or when leaving a bridge, or when address learning
is turned off via ``port_bridge_flags``.
Bridge VLAN filtering
---------------------
@ -697,55 +951,44 @@ Bridge VLAN filtering
allowed.
- ``port_vlan_add``: bridge layer function invoked when a VLAN is configured
(tagged or untagged) for the given switch port. If the operation is not
supported by the hardware, this function should return ``-EOPNOTSUPP`` to
inform the bridge code to fallback to a software implementation.
(tagged or untagged) for the given switch port. The CPU port becomes a member
of a VLAN only if a foreign bridge port is also a member of it (and
forwarding needs to take place in software), or the VLAN is installed to the
VLAN group of the bridge device itself, for termination purposes
(``bridge vlan add dev br0 vid 100 self``). VLANs on shared ports are
reference counted and removed when there is no user left. Drivers do not need
to manually install a VLAN on the CPU port.
- ``port_vlan_del``: bridge layer function invoked when a VLAN is removed from the
given switch port
- ``port_vlan_dump``: bridge layer function invoked with a switchdev callback
function that the driver has to call for each VLAN the given port is a member
of. A switchdev object is used to carry the VID and bridge flags.
- ``port_fdb_add``: bridge layer function invoked when the bridge wants to install a
Forwarding Database entry, the switch hardware should be programmed with the
specified address in the specified VLAN Id in the forwarding database
associated with this VLAN ID. If the operation is not supported, this
function should return ``-EOPNOTSUPP`` to inform the bridge code to fallback to
a software implementation.
.. note:: VLAN ID 0 corresponds to the port private database, which, in the context
of DSA, would be its port-based VLAN, used by the associated bridge device.
associated with this VLAN ID.
- ``port_fdb_del``: bridge layer function invoked when the bridge wants to remove a
Forwarding Database entry, the switch hardware should be programmed to delete
the specified MAC address from the specified VLAN ID if it was mapped into
this port forwarding database
- ``port_fdb_dump``: bridge layer function invoked with a switchdev callback
function that the driver has to call for each MAC address known to be behind
the given port. A switchdev object is used to carry the VID and FDB info.
- ``port_fdb_dump``: bridge bypass function invoked by ``ndo_fdb_dump`` on the
physical DSA port interfaces. Since DSA does not attempt to keep in sync its
hardware FDB entries with the software bridge, this method is implemented as
a means to view the entries visible on user ports in the hardware database.
The entries reported by this function have the ``self`` flag in the output of
the ``bridge fdb show`` command.
- ``port_mdb_add``: bridge layer function invoked when the bridge wants to install
a multicast database entry. If the operation is not supported, this function
should return ``-EOPNOTSUPP`` to inform the bridge code to fallback to a
software implementation. The switch hardware should be programmed with the
a multicast database entry. The switch hardware should be programmed with the
specified address in the specified VLAN ID in the forwarding database
associated with this VLAN ID.
.. note:: VLAN ID 0 corresponds to the port private database, which, in the context
of DSA, would be its port-based VLAN, used by the associated bridge device.
- ``port_mdb_del``: bridge layer function invoked when the bridge wants to remove a
multicast database entry, the switch hardware should be programmed to delete
the specified MAC address from the specified VLAN ID if it was mapped into
this port forwarding database.
- ``port_mdb_dump``: bridge layer function invoked with a switchdev callback
function that the driver has to call for each MAC address known to be behind
the given port. A switchdev object is used to carry the VID and MDB info.
Link aggregation
----------------


@ -1058,11 +1058,7 @@ udp_rmem_min - INTEGER
Default: 4K
udp_wmem_min - INTEGER
Minimal size of send buffer used by UDP sockets in moderation.
Each UDP socket is able to use the size for sending data, even if
total pages of UDP sockets exceed udp_mem pressure. The unit is byte.
Default: 4K
UDP does not have tx memory accounting and this tunable has no effect.
RAW variables
=============


@ -5657,6 +5657,7 @@ by a string of size ``name_size``.
#define KVM_STATS_UNIT_BYTES (0x1 << KVM_STATS_UNIT_SHIFT)
#define KVM_STATS_UNIT_SECONDS (0x2 << KVM_STATS_UNIT_SHIFT)
#define KVM_STATS_UNIT_CYCLES (0x3 << KVM_STATS_UNIT_SHIFT)
#define KVM_STATS_UNIT_BOOLEAN (0x4 << KVM_STATS_UNIT_SHIFT)
#define KVM_STATS_UNIT_MAX KVM_STATS_UNIT_CYCLES
#define KVM_STATS_BASE_SHIFT 8
@ -5702,14 +5703,13 @@ Bits 0-3 of ``flags`` encode the type:
by the ``hist_param`` field. The range of the Nth bucket (1 <= N < ``size``)
is [``hist_param``*(N-1), ``hist_param``*N), while the range of the last
bucket is [``hist_param``*(``size``-1), +INF). (+INF means positive infinity
value.) The bucket value indicates how many samples fell in the bucket's range.
value.)
* ``KVM_STATS_TYPE_LOG_HIST``
The statistic is reported as a logarithmic histogram. The number of
buckets is specified by the ``size`` field. The range of the first bucket is
[0, 1), while the range of the last bucket is [pow(2, ``size``-2), +INF).
Otherwise, the Nth bucket (1 < N < ``size``) covers
[pow(2, N-2), pow(2, N-1)). The bucket value indicates how many samples fell
in the bucket's range.
[pow(2, N-2), pow(2, N-1)).
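As an illustration, a logarithmic histogram with ``size`` = 5 has buckets
covering [0, 1), [1, 2), [2, 4), [4, 8) and [8, +INF).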
Bits 4-7 of ``flags`` encode the unit:
@ -5724,6 +5724,15 @@ Bits 4-7 of ``flags`` encode the unit:
It indicates that the statistics data is used to measure time or latency.
* ``KVM_STATS_UNIT_CYCLES``
It indicates that the statistics data is used to measure CPU clock cycles.
* ``KVM_STATS_UNIT_BOOLEAN``
It indicates that the statistic will always be either 0 or 1. Boolean
statistics of "peak" type will never go back from 1 to 0. Boolean
statistics can be linear histograms (with two buckets) but not logarithmic
histograms.
Note that, in the case of histograms, the unit applies to the bucket
ranges, while the bucket value indicates how many samples fell in the
bucket's range.
Bits 8-11 of ``flags``, together with ``exponent``, encode the scale of the
unit:
@ -5746,7 +5755,7 @@ the corresponding statistics data.
The ``bucket_size`` field is used as a parameter for histogram statistics data.
It is only used by linear histogram statistics data, specifying the size of a
bucket.
bucket in the unit expressed by bits 4-11 of ``flags`` together with ``exponent``.
The ``name`` field is the name string of the statistics data. The name string
starts at the end of ``struct kvm_stats_desc``. The maximum length including


@ -2497,10 +2497,8 @@ F: drivers/power/reset/oxnas-restart.c
N: oxnas
ARM/PALM TREO SUPPORT
M: Tomas Cech <sleep_walker@suse.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
W: http://hackndev.com
S: Orphan
F: arch/arm/mach-pxa/palmtreo.*
ARM/PALMTX,PALMT5,PALMLD,PALMTE2,PALMTC SUPPORT
@ -14377,7 +14375,8 @@ S: Maintained
F: drivers/net/phy/nxp-c45-tja11xx.c
NXP FSPI DRIVER
M: Ashish Kumar <ashish.kumar@nxp.com>
M: Han Xu <han.xu@nxp.com>
M: Haibo Chen <haibo.chen@nxp.com>
R: Yogesh Gaur <yogeshgaur.83@gmail.com>
L: linux-spi@vger.kernel.org
S: Maintained
@ -17300,12 +17299,15 @@ N: riscv
K: riscv
RISC-V/MICROCHIP POLARFIRE SOC SUPPORT
M: Lewis Hanly <lewis.hanly@microchip.com>
M: Conor Dooley <conor.dooley@microchip.com>
M: Daire McNamara <daire.mcnamara@microchip.com>
L: linux-riscv@lists.infradead.org
S: Supported
F: arch/riscv/boot/dts/microchip/
F: drivers/char/hw_random/mpfs-rng.c
F: drivers/clk/microchip/clk-mpfs.c
F: drivers/mailbox/mailbox-mpfs.c
F: drivers/pci/controller/pcie-microchip-host.c
F: drivers/soc/microchip/
F: include/soc/microchip/mpfs.h


@ -2,7 +2,7 @@
VERSION = 5
PATCHLEVEL = 19
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Superb Owl
# *DOCUMENTATION*


@ -438,6 +438,13 @@ config MMU_GATHER_PAGE_SIZE
config MMU_GATHER_NO_RANGE
bool
select MMU_GATHER_MERGE_VMAS
config MMU_GATHER_NO_FLUSH_CACHE
bool
config MMU_GATHER_MERGE_VMAS
bool
config MMU_GATHER_NO_GATHER
bool


@ -226,7 +226,7 @@
reg = <0x28>;
#gpio-cells = <2>;
gpio-controller;
ngpio = <32>;
ngpios = <62>;
};
sgtl5000: codec@a {


@ -166,7 +166,7 @@
atmel_mxt_ts: touchscreen@4a {
compatible = "atmel,maxtouch";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_atmel_conn>;
pinctrl-0 = <&pinctrl_atmel_conn &pinctrl_atmel_snvs_conn>;
reg = <0x4a>;
interrupt-parent = <&gpio5>;
interrupts = <4 IRQ_TYPE_EDGE_FALLING>; /* SODIMM 107 / INT */
@ -331,7 +331,6 @@
pinctrl_atmel_conn: atmelconngrp {
fsl,pins = <
MX6UL_PAD_JTAG_MOD__GPIO1_IO10 0xb0a0 /* SODIMM 106 */
MX6ULL_PAD_SNVS_TAMPER4__GPIO5_IO04 0xb0a0 /* SODIMM 107 */
>;
};
@ -684,6 +683,12 @@
};
&iomuxc_snvs {
pinctrl_atmel_snvs_conn: atmelsnvsconngrp {
fsl,pins = <
MX6ULL_PAD_SNVS_TAMPER4__GPIO5_IO04 0xb0a0 /* SODIMM 107 */
>;
};
pinctrl_snvs_gpio1: snvsgpio1grp {
fsl,pins = <
MX6ULL_PAD_SNVS_TAMPER6__GPIO5_IO06 0x110a0 /* SODIMM 93 */


@ -87,22 +87,22 @@
phy4: ethernet-phy@5 {
reg = <5>;
coma-mode-gpios = <&gpio 37 GPIO_ACTIVE_HIGH>;
coma-mode-gpios = <&gpio 37 GPIO_OPEN_DRAIN>;
};
phy5: ethernet-phy@6 {
reg = <6>;
coma-mode-gpios = <&gpio 37 GPIO_ACTIVE_HIGH>;
coma-mode-gpios = <&gpio 37 GPIO_OPEN_DRAIN>;
};
phy6: ethernet-phy@7 {
reg = <7>;
coma-mode-gpios = <&gpio 37 GPIO_ACTIVE_HIGH>;
coma-mode-gpios = <&gpio 37 GPIO_OPEN_DRAIN>;
};
phy7: ethernet-phy@8 {
reg = <8>;
coma-mode-gpios = <&gpio 37 GPIO_ACTIVE_HIGH>;
coma-mode-gpios = <&gpio 37 GPIO_OPEN_DRAIN>;
};
};


@ -506,6 +506,8 @@
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc GCC_BLSP1_UART2_APPS_CLK>, <&gcc GCC_BLSP1_AHB_CLK>;
clock-names = "core", "iface";
pinctrl-names = "default";
pinctrl-0 = <&blsp1_uart2_default>;
status = "disabled";
};
@ -581,6 +583,9 @@
interrupts = <GIC_SPI 113 IRQ_TYPE_NONE>;
clocks = <&gcc GCC_BLSP2_UART1_APPS_CLK>, <&gcc GCC_BLSP2_AHB_CLK>;
clock-names = "core", "iface";
pinctrl-names = "default", "sleep";
pinctrl-0 = <&blsp2_uart1_default>;
pinctrl-1 = <&blsp2_uart1_sleep>;
status = "disabled";
};
@ -599,6 +604,8 @@
interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc GCC_BLSP2_UART4_APPS_CLK>, <&gcc GCC_BLSP2_AHB_CLK>;
clock-names = "core", "iface";
pinctrl-names = "default";
pinctrl-0 = <&blsp2_uart4_default>;
status = "disabled";
};
@ -639,6 +646,9 @@
interrupts = <0 106 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&gcc GCC_BLSP2_QUP6_I2C_APPS_CLK>, <&gcc GCC_BLSP2_AHB_CLK>;
clock-names = "core", "iface";
pinctrl-names = "default", "sleep";
pinctrl-0 = <&blsp2_i2c6_default>;
pinctrl-1 = <&blsp2_i2c6_sleep>;
#address-cells = <1>;
#size-cells = <0>;
};
@ -1256,7 +1266,7 @@
};
};
blsp1_uart2_active: blsp1-uart2-active {
blsp1_uart2_default: blsp1-uart2-default {
rx {
pins = "gpio5";
function = "blsp_uart2";
@ -1272,7 +1282,7 @@
};
};
blsp2_uart1_active: blsp2-uart1-active {
blsp2_uart1_default: blsp2-uart1-default {
tx-rts {
pins = "gpio41", "gpio44";
function = "blsp_uart7";
@ -1295,7 +1305,7 @@
bias-pull-down;
};
blsp2_uart4_active: blsp2-uart4-active {
blsp2_uart4_default: blsp2-uart4-default {
tx-rts {
pins = "gpio53", "gpio56";
function = "blsp_uart10";
@ -1406,7 +1416,19 @@
bias-pull-up;
};
/* BLSP2_I2C6 info is missing - nobody uses it though? */
blsp2_i2c6_default: blsp2-i2c6-default {
pins = "gpio87", "gpio88";
function = "blsp_i2c12";
drive-strength = <2>;
bias-disable;
};
blsp2_i2c6_sleep: blsp2-i2c6-sleep {
pins = "gpio87", "gpio88";
function = "blsp_i2c12";
drive-strength = <2>;
bias-pull-up;
};
spi8_default: spi8_default {
mosi {


@ -1124,7 +1124,7 @@
clocks = <&pmc PMC_TYPE_PERIPHERAL 55>, <&pmc PMC_TYPE_GCK 55>;
clock-names = "pclk", "gclk";
assigned-clocks = <&pmc PMC_TYPE_CORE PMC_I2S1_MUX>;
assigned-parrents = <&pmc PMC_TYPE_GCK 55>;
assigned-clock-parents = <&pmc PMC_TYPE_GCK 55>;
status = "disabled";
};


@ -169,7 +169,7 @@
flash@0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "mxicy,mx25l1606e", "winbond,w25q128";
compatible = "mxicy,mx25l1606e", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <40000000>;
};


@ -311,7 +311,7 @@ void __init rockchip_suspend_init(void)
&match);
if (!match) {
pr_err("Failed to find PMU node\n");
return;
goto out_put;
}
pm_data = (struct rockchip_pm_data *) match->data;
@ -320,9 +320,12 @@ void __init rockchip_suspend_init(void)
if (ret) {
pr_err("%s: matches init error %d\n", __func__, ret);
return;
goto out_put;
}
}
suspend_set_ops(pm_data->ops);
out_put:
of_node_put(np);
}


@ -9,6 +9,14 @@
/delete-node/ cpu@3;
};
timer {
compatible = "arm,armv8-timer";
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>,
<GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_LOW)>;
};
pmu {
compatible = "arm,cortex-a53-pmu";
interrupts = <GIC_SPI 9 IRQ_TYPE_LEVEL_HIGH>,


@ -29,6 +29,8 @@
device_type = "cpu";
compatible = "brcm,brahma-b53";
reg = <0x0>;
enable-method = "spin-table";
cpu-release-addr = <0x0 0xfff8>;
next-level-cache = <&l2>;
};


@ -224,9 +224,12 @@
little-endian;
};
efuse@1e80000 {
sfp: efuse@1e80000 {
compatible = "fsl,ls1028a-sfp";
reg = <0x0 0x1e80000 0x0 0x10000>;
clocks = <&clockgen QORIQ_CLK_PLATFORM_PLL
QORIQ_CLK_PLL_DIV(4)>;
clock-names = "sfp";
#address-cells = <1>;
#size-cells = <1>;


@ -376,7 +376,8 @@ camera: &i2c7 {
<&cru ACLK_VIO>,
<&cru ACLK_GIC_PRE>,
<&cru PCLK_DDR>,
<&cru ACLK_HDCP>;
<&cru ACLK_HDCP>,
<&cru ACLK_VDU>;
assigned-clock-rates =
<600000000>, <1600000000>,
<1000000000>,
@ -388,6 +389,7 @@ camera: &i2c7 {
<400000000>,
<200000000>,
<200000000>,
<400000000>,
<400000000>;
};


@ -1462,7 +1462,8 @@
<&cru HCLK_PERILP1>, <&cru PCLK_PERILP1>,
<&cru ACLK_VIO>, <&cru ACLK_HDCP>,
<&cru ACLK_GIC_PRE>,
<&cru PCLK_DDR>;
<&cru PCLK_DDR>,
<&cru ACLK_VDU>;
assigned-clock-rates =
<594000000>, <800000000>,
<1000000000>,
@ -1473,7 +1474,8 @@
<100000000>, <50000000>,
<400000000>, <400000000>,
<200000000>,
<200000000>;
<200000000>,
<400000000>;
};
grf: syscon@ff770000 {


@ -687,6 +687,7 @@
};
&usb_host0_xhci {
dr_mode = "host";
status = "okay";
};


@ -133,7 +133,7 @@
assigned-clocks = <&cru SCLK_GMAC1_RX_TX>, <&cru SCLK_GMAC1_RGMII_SPEED>, <&cru SCLK_GMAC1>;
assigned-clock-parents = <&cru SCLK_GMAC1_RGMII_SPEED>, <&cru SCLK_GMAC1>, <&gmac1_clkin>;
clock_in_out = "input";
phy-mode = "rgmii-id";
phy-mode = "rgmii";
phy-supply = <&vcc_3v3>;
pinctrl-names = "default";
pinctrl-0 = <&gmac1m1_miim


@ -4,21 +4,6 @@
#define __ASM_CSKY_TLB_H
#include <asm/cacheflush.h>
#define tlb_start_vma(tlb, vma) \
do { \
if (!(tlb)->fullmm) \
flush_cache_range(vma, (vma)->vm_start, (vma)->vm_end); \
} while (0)
#define tlb_end_vma(tlb, vma) \
do { \
if (!(tlb)->fullmm) \
flush_tlb_range(vma, (vma)->vm_start, (vma)->vm_end); \
} while (0)
#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
#include <asm-generic/tlb.h>
#endif /* __ASM_CSKY_TLB_H */


@ -108,6 +108,7 @@ config LOONGARCH
select TRACE_IRQFLAGS_SUPPORT
select USE_PERCPU_NUMA_NODE_ID
select ZONE_DMA32
select MMU_GATHER_MERGE_VMAS if MMU
config 32BIT
bool


@ -137,16 +137,6 @@ static inline void invtlb_all(u32 op, u32 info, u64 addr)
);
}
/*
* LoongArch doesn't need any special per-pte or per-vma handling, except
* we need to flush cache for area to be unmapped.
*/
#define tlb_start_vma(tlb, vma) \
do { \
if (!(tlb)->fullmm) \
flush_cache_range(vma, vma->vm_start, vma->vm_end); \
} while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
static void tlb_flush(struct mmu_gather *tlb);


@ -256,6 +256,7 @@ config PPC
select IRQ_FORCED_THREADING
select MMU_GATHER_PAGE_SIZE
select MMU_GATHER_RCU_TABLE_FREE
select MMU_GATHER_MERGE_VMAS
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE if PPC64 || NOT_COHERENT_CACHE
select NEED_PER_CPU_EMBED_FIRST_CHUNK if PPC64


@ -19,8 +19,6 @@
#include <linux/pagemap.h>
#define tlb_start_vma(tlb, vma) do { } while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define __tlb_remove_tlb_entry __tlb_remove_tlb_entry
#define tlb_flush tlb_flush


@ -50,6 +50,7 @@
riscv,isa = "rv64imafdc";
clocks = <&clkcfg CLK_CPU>;
tlb-split;
next-level-cache = <&cctrllr>;
status = "okay";
cpu1_intc: interrupt-controller {
@ -77,6 +78,7 @@
riscv,isa = "rv64imafdc";
clocks = <&clkcfg CLK_CPU>;
tlb-split;
next-level-cache = <&cctrllr>;
status = "okay";
cpu2_intc: interrupt-controller {
@ -104,6 +106,7 @@
riscv,isa = "rv64imafdc";
clocks = <&clkcfg CLK_CPU>;
tlb-split;
next-level-cache = <&cctrllr>;
status = "okay";
cpu3_intc: interrupt-controller {
@ -131,6 +134,7 @@
riscv,isa = "rv64imafdc";
clocks = <&clkcfg CLK_CPU>;
tlb-split;
next-level-cache = <&cctrllr>;
status = "okay";
cpu4_intc: interrupt-controller {
#interrupt-cells = <1>;


@ -111,6 +111,7 @@ void __init_or_module sifive_errata_patch_func(struct alt_entry *begin,
cpu_apply_errata |= tmp;
}
}
if (cpu_apply_errata != cpu_req_errata)
if (stage != RISCV_ALTERNATIVES_MODULE &&
cpu_apply_errata != cpu_req_errata)
warn_miss_errata(cpu_req_errata - cpu_apply_errata);
}


@ -175,7 +175,7 @@ static inline pud_t pfn_pud(unsigned long pfn, pgprot_t prot)
static inline unsigned long _pud_pfn(pud_t pud)
{
return pud_val(pud) >> _PAGE_PFN_SHIFT;
return __page_val_to_pfn(pud_val(pud));
}
static inline pmd_t *pud_pgtable(pud_t pud)
@ -278,13 +278,13 @@ static inline p4d_t pfn_p4d(unsigned long pfn, pgprot_t prot)
static inline unsigned long _p4d_pfn(p4d_t p4d)
{
return p4d_val(p4d) >> _PAGE_PFN_SHIFT;
return __page_val_to_pfn(p4d_val(p4d));
}
static inline pud_t *p4d_pgtable(p4d_t p4d)
{
if (pgtable_l4_enabled)
return (pud_t *)pfn_to_virt(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
return (pud_t *)pfn_to_virt(__page_val_to_pfn(p4d_val(p4d)));
return (pud_t *)pud_pgtable((pud_t) { p4d_val(p4d) });
}
@ -292,7 +292,7 @@ static inline pud_t *p4d_pgtable(p4d_t p4d)
static inline struct page *p4d_page(p4d_t p4d)
{
return pfn_to_page(p4d_val(p4d) >> _PAGE_PFN_SHIFT);
return pfn_to_page(__page_val_to_pfn(p4d_val(p4d)));
}
#define pud_index(addr) (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1))
@ -347,7 +347,7 @@ static inline void pgd_clear(pgd_t *pgd)
static inline p4d_t *pgd_pgtable(pgd_t pgd)
{
if (pgtable_l5_enabled)
return (p4d_t *)pfn_to_virt(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
return (p4d_t *)pfn_to_virt(__page_val_to_pfn(pgd_val(pgd)));
return (p4d_t *)p4d_pgtable((p4d_t) { pgd_val(pgd) });
}
@ -355,7 +355,7 @@ static inline p4d_t *pgd_pgtable(pgd_t pgd)
static inline struct page *pgd_page(pgd_t pgd)
{
return pfn_to_page(pgd_val(pgd) >> _PAGE_PFN_SHIFT);
return pfn_to_page(__page_val_to_pfn(pgd_val(pgd)));
}
#define pgd_page(pgd) pgd_page(pgd)


@ -261,7 +261,7 @@ static inline pgd_t pfn_pgd(unsigned long pfn, pgprot_t prot)
static inline unsigned long _pgd_pfn(pgd_t pgd)
{
return pgd_val(pgd) >> _PAGE_PFN_SHIFT;
return __page_val_to_pfn(pgd_val(pgd));
}
static inline struct page *pmd_page(pmd_t pmd)
@ -590,14 +590,14 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
return __pmd(pmd_val(pmd) & ~(_PAGE_PRESENT|_PAGE_PROT_NONE));
}
#define __pmd_to_phys(pmd) (pmd_val(pmd) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
#define __pmd_to_phys(pmd) (__page_val_to_pfn(pmd_val(pmd)) << PAGE_SHIFT)
static inline unsigned long pmd_pfn(pmd_t pmd)
{
return ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT);
}
#define __pud_to_phys(pud) (pud_val(pud) >> _PAGE_PFN_SHIFT << PAGE_SHIFT)
#define __pud_to_phys(pud) (__page_val_to_pfn(pud_val(pud)) << PAGE_SHIFT)
static inline unsigned long pud_pfn(pud_t pud)
{


@ -54,7 +54,7 @@ static inline unsigned long gstage_pte_index(gpa_t addr, u32 level)
static inline unsigned long gstage_pte_page_vaddr(pte_t pte)
{
return (unsigned long)pfn_to_virt(pte_val(pte) >> _PAGE_PFN_SHIFT);
return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte)));
}
static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)


@ -781,9 +781,11 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
if (kvm_request_pending(vcpu)) {
if (kvm_check_request(KVM_REQ_SLEEP, vcpu)) {
kvm_vcpu_srcu_read_unlock(vcpu);
rcuwait_wait_event(wait,
(!vcpu->arch.power_off) && (!vcpu->arch.pause),
TASK_INTERRUPTIBLE);
kvm_vcpu_srcu_read_lock(vcpu);
if (vcpu->arch.power_off || vcpu->arch.pause) {
/*


@ -204,6 +204,7 @@ config S390
select IOMMU_SUPPORT if PCI
select MMU_GATHER_NO_GATHER
select MMU_GATHER_RCU_TABLE_FREE
select MMU_GATHER_MERGE_VMAS
select MODULES_USE_ELF_RELA
select NEED_DMA_MAP_STATE if PCI
select NEED_SG_DMA_LENGTH if PCI


@ -82,7 +82,7 @@ endif
ifdef CONFIG_EXPOLINE
ifdef CONFIG_EXPOLINE_EXTERN
KBUILD_LDFLAGS_MODULE += arch/s390/lib/expoline.o
KBUILD_LDFLAGS_MODULE += arch/s390/lib/expoline/expoline.o
CC_FLAGS_EXPOLINE := -mindirect-branch=thunk-extern
CC_FLAGS_EXPOLINE += -mfunction-return=thunk-extern
else
@ -163,6 +163,12 @@ vdso_prepare: prepare0
$(Q)$(MAKE) $(build)=arch/s390/kernel/vdso64 include/generated/vdso64-offsets.h
$(if $(CONFIG_COMPAT),$(Q)$(MAKE) \
$(build)=arch/s390/kernel/vdso32 include/generated/vdso32-offsets.h)
ifdef CONFIG_EXPOLINE_EXTERN
modules_prepare: expoline_prepare
expoline_prepare:
$(Q)$(MAKE) $(build)=arch/s390/lib/expoline arch/s390/lib/expoline/expoline.o
endif
endif
# Don't use tabs in echo arguments


@ -2,8 +2,6 @@
#ifndef _ASM_S390_NOSPEC_ASM_H
#define _ASM_S390_NOSPEC_ASM_H
#include <asm/alternative-asm.h>
#include <asm/asm-offsets.h>
#include <asm/dwarf.h>
#ifdef __ASSEMBLY__


@ -27,9 +27,6 @@ static inline void tlb_flush(struct mmu_gather *tlb);
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
struct page *page, int page_size);
#define tlb_start_vma(tlb, vma) do { } while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define tlb_flush tlb_flush
#define pte_free_tlb pte_free_tlb
#define pmd_free_tlb pmd_free_tlb


@ -7,7 +7,6 @@ lib-y += delay.o string.o uaccess.o find.o spinlock.o
obj-y += mem.o xor.o
lib-$(CONFIG_KPROBES) += probes.o
lib-$(CONFIG_UPROBES) += probes.o
obj-$(CONFIG_EXPOLINE_EXTERN) += expoline.o
obj-$(CONFIG_S390_KPROBES_SANITY_TEST) += test_kprobes_s390.o
test_kprobes_s390-objs += test_kprobes_asm.o test_kprobes.o
@ -22,3 +21,5 @@ obj-$(CONFIG_S390_MODULES_SANITY_TEST) += test_modules.o
obj-$(CONFIG_S390_MODULES_SANITY_TEST_HELPERS) += test_modules_helpers.o
lib-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
obj-$(CONFIG_EXPOLINE_EXTERN) += expoline/


@ -0,0 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += expoline.o


@ -67,6 +67,8 @@ config SPARC64
select HAVE_KRETPROBES
select HAVE_KPROBES
select MMU_GATHER_RCU_TABLE_FREE if SMP
select MMU_GATHER_MERGE_VMAS
select MMU_GATHER_NO_FLUSH_CACHE
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD


@ -22,8 +22,6 @@ void smp_flush_tlb_mm(struct mm_struct *mm);
void __flush_tlb_pending(unsigned long, unsigned long, unsigned long *);
void flush_tlb_pending(void);
#define tlb_start_vma(tlb, vma) do { } while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define tlb_flush(tlb) flush_tlb_pending()
/*


@ -432,6 +432,10 @@ void apply_retpolines(s32 *start, s32 *end)
{
}
void apply_returns(s32 *start, s32 *end)
{
}
void apply_alternatives(struct alt_instr *start, struct alt_instr *end)
{
}


@ -245,6 +245,7 @@ config X86
select HAVE_PERF_REGS
select HAVE_PERF_USER_STACK_DUMP
select MMU_GATHER_RCU_TABLE_FREE if PARAVIRT
select MMU_GATHER_MERGE_VMAS
select HAVE_POSIX_CPU_TIMERS_TASK_WORK
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_RELIABLE_STACKTRACE if UNWINDER_ORC || STACK_VALIDATION


@ -727,7 +727,6 @@ native_irq_return_ldt:
pushq %rdi /* Stash user RDI */
swapgs /* to kernel GS */
SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi /* to kernel CR3 */
UNTRAIN_RET
movq PER_CPU_VAR(espfix_waddr), %rdi
movq %rax, (0*8)(%rdi) /* user RAX */


@ -2,9 +2,6 @@
#ifndef _ASM_X86_TLB_H
#define _ASM_X86_TLB_H
#define tlb_start_vma(tlb, vma) do { } while (0)
#define tlb_end_vma(tlb, vma) do { } while (0)
#define tlb_flush tlb_flush
static inline void tlb_flush(struct mmu_gather *tlb);


@ -16,6 +16,12 @@ bool cpc_supported_by_cpu(void)
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD:
case X86_VENDOR_HYGON:
if (boot_cpu_data.x86 == 0x19 && ((boot_cpu_data.x86_model <= 0x0f) ||
(boot_cpu_data.x86_model >= 0x20 && boot_cpu_data.x86_model <= 0x2f)))
return true;
else if (boot_cpu_data.x86 == 0x17 &&
boot_cpu_data.x86_model >= 0x70 && boot_cpu_data.x86_model <= 0x7f)
return true;
return boot_cpu_has(X86_FEATURE_CPPC);
}
return false;


@ -793,7 +793,7 @@ enum retbleed_mitigation_cmd {
RETBLEED_CMD_IBPB,
};
const char * const retbleed_strings[] = {
static const char * const retbleed_strings[] = {
[RETBLEED_MITIGATION_NONE] = "Vulnerable",
[RETBLEED_MITIGATION_UNRET] = "Mitigation: untrained return thunk",
[RETBLEED_MITIGATION_IBPB] = "Mitigation: IBPB",
@ -1181,7 +1181,7 @@ spectre_v2_user_select_mitigation(void)
if (retbleed_mitigation == RETBLEED_MITIGATION_UNRET) {
if (mode != SPECTRE_V2_USER_STRICT &&
mode != SPECTRE_V2_USER_STRICT_PREFERRED)
pr_info("Selecting STIBP always-on mode to complement retbleed mitigation'\n");
pr_info("Selecting STIBP always-on mode to complement retbleed mitigation\n");
mode = SPECTRE_V2_USER_STRICT_PREFERRED;
}


@ -23,6 +23,7 @@
#include <asm/cpufeatures.h>
#include <asm/percpu.h>
#include <asm/nops.h>
#include <asm/nospec-branch.h>
#include <asm/bootparam.h>
#include <asm/export.h>
#include <asm/pgtable_32.h>


@ -189,9 +189,6 @@
#define X8(x...) X4(x), X4(x)
#define X16(x...) X8(x), X8(x)
#define NR_FASTOP (ilog2(sizeof(ulong)) + 1)
#define FASTOP_SIZE (8 * (1 + HAS_KERNEL_IBT))
struct opcode {
u64 flags;
u8 intercept;
@ -306,9 +303,15 @@ static void invalidate_registers(struct x86_emulate_ctxt *ctxt)
* Moreover, they are all exactly FASTOP_SIZE bytes long, so functions for
* different operand sizes can be reached by calculation, rather than a jump
* table (which would be bigger than the code).
*
* The 16 byte alignment, considering 5 bytes for the RET thunk, 3 for ENDBR
* and 1 for the straight line speculation INT3, leaves 7 bytes for the
* body of the function. Currently none is larger than 4.
*/
static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
#define FASTOP_SIZE 16
#define __FOP_FUNC(name) \
".align " __stringify(FASTOP_SIZE) " \n\t" \
".type " name ", @function \n\t" \
@ -442,11 +445,7 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
* RET | JMP __x86_return_thunk [1,5 bytes; CONFIG_RETHUNK]
* INT3 [1 byte; CONFIG_SLS]
*/
#define RET_LENGTH (1 + (4 * IS_ENABLED(CONFIG_RETHUNK)) + \
IS_ENABLED(CONFIG_SLS))
#define SETCC_LENGTH (ENDBR_INSN_SIZE + 3 + RET_LENGTH)
#define SETCC_ALIGN (4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1))
static_assert(SETCC_LENGTH <= SETCC_ALIGN);
#define SETCC_ALIGN 16
#define FOP_SETCC(op) \
".align " __stringify(SETCC_ALIGN) " \n\t" \


@ -2278,7 +2278,6 @@ static void prepare_vmcs02_early(struct vcpu_vmx *vmx, struct loaded_vmcs *vmcs0
SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
SECONDARY_EXEC_APIC_REGISTER_VIRT |
SECONDARY_EXEC_ENABLE_VMFUNC |
SECONDARY_EXEC_TSC_SCALING |
SECONDARY_EXEC_DESC);
if (nested_cpu_has(vmcs12,


@ -298,7 +298,7 @@ const struct _kvm_stats_desc kvm_vcpu_stats_desc[] = {
STATS_DESC_COUNTER(VCPU, directed_yield_successful),
STATS_DESC_COUNTER(VCPU, preemption_reported),
STATS_DESC_COUNTER(VCPU, preemption_other),
STATS_DESC_ICOUNTER(VCPU, guest_mode)
STATS_DESC_IBOOLEAN(VCPU, guest_mode)
};
const struct kvm_stats_header kvm_vcpu_stats_header = {
@ -9143,15 +9143,17 @@ static int kvm_pv_clock_pairing(struct kvm_vcpu *vcpu, gpa_t paddr,
*/
static void kvm_pv_kick_cpu_op(struct kvm *kvm, int apicid)
{
struct kvm_lapic_irq lapic_irq;
/*
* All other fields are unused for APIC_DM_REMRD, but may be consumed by
* common code, e.g. for tracing. Defer initialization to the compiler.
*/
struct kvm_lapic_irq lapic_irq = {
.delivery_mode = APIC_DM_REMRD,
.dest_mode = APIC_DEST_PHYSICAL,
.shorthand = APIC_DEST_NOSHORT,
.dest_id = apicid,
};
lapic_irq.shorthand = APIC_DEST_NOSHORT;
lapic_irq.dest_mode = APIC_DEST_PHYSICAL;
lapic_irq.level = 0;
lapic_irq.dest_id = apicid;
lapic_irq.msi_redir_hint = false;
lapic_irq.delivery_mode = APIC_DM_REMRD;
kvm_irq_delivery_to_apic(kvm, NULL, &lapic_irq, NULL);
}


@ -77,10 +77,20 @@ static uint8_t __pte2cachemode_tbl[8] = {
[__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC,
};
/* Check that the write-protect PAT entry is set for write-protect */
/*
* Check that the write-protect PAT entry is set for write-protect.
* To do this without making assumptions how PAT has been set up (Xen has
* another layout than the kernel), translate the _PAGE_CACHE_MODE_WP cache
* mode via the __cachemode2pte_tbl[] into protection bits (those protection
* bits will select a cache mode of WP or better), and then translate the
* protection bits back into the cache mode using __pte2cm_idx() and the
* __pte2cachemode_tbl[] array. This will return the really used cache mode.
*/
bool x86_has_pat_wp(void)
{
return __pte2cachemode_tbl[_PAGE_CACHE_MODE_WP] == _PAGE_CACHE_MODE_WP;
uint16_t prot = __cachemode2pte_tbl[_PAGE_CACHE_MODE_WP];
return __pte2cachemode_tbl[__pte2cm_idx(prot)] == _PAGE_CACHE_MODE_WP;
}
enum page_cache_mode pgprot2cachemode(pgprot_t pgprot)


@ -23,6 +23,7 @@
#include <linux/objtool.h>
#include <asm/page_types.h>
#include <asm/segment.h>
#include <asm/nospec-branch.h>
.text
.code64
@ -75,7 +76,9 @@ STACK_FRAME_NON_STANDARD __efi64_thunk
1: movq 0x20(%rsp), %rsp
pop %rbx
pop %rbp
RET
ANNOTATE_UNRET_SAFE
ret
int3
.code32
2: pushl $__KERNEL_CS


@ -345,6 +345,7 @@ void __blk_queue_split(struct request_queue *q, struct bio **bio,
/* there isn't chance to merge the splitted bio */
split->bi_opf |= REQ_NOMERGE;
blkcg_bio_issue_init(split);
bio_chain(split, *bio);
trace_block_split(split, (*bio)->bi_iter.bi_sector);
submit_bio_noacct(*bio);


@ -73,7 +73,7 @@ module_param(device_id_scheme, bool, 0444);
static int only_lcd = -1;
module_param(only_lcd, int, 0444);
static bool has_backlight;
static bool may_report_brightness_keys;
static int register_count;
static DEFINE_MUTEX(register_count_mutex);
static DEFINE_MUTEX(video_list_lock);
@ -1224,7 +1224,7 @@ acpi_video_bus_get_one_device(struct acpi_device *device,
acpi_video_device_find_cap(data);
if (data->cap._BCM && data->cap._BCL)
has_backlight = true;
may_report_brightness_keys = true;
mutex_lock(&video->device_list_lock);
list_add_tail(&data->entry, &video->video_device_list);
@ -1693,6 +1693,9 @@ static void acpi_video_device_notify(acpi_handle handle, u32 event, void *data)
break;
}
if (keycode)
may_report_brightness_keys = true;
acpi_notifier_call_chain(device, event, 0);
if (keycode && (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS)) {
@ -2253,7 +2256,7 @@ void acpi_video_unregister(void)
if (register_count) {
acpi_bus_unregister_driver(&acpi_video_bus);
register_count = 0;
has_backlight = false;
may_report_brightness_keys = false;
}
mutex_unlock(&register_count_mutex);
}
@ -2275,7 +2278,7 @@ void acpi_video_unregister_backlight(void)
bool acpi_video_handles_brightness_key_presses(void)
{
return has_backlight &&
return may_report_brightness_keys &&
(report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS);
}
EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);


@ -1174,7 +1174,7 @@ static void __cold entropy_timer(struct timer_list *timer)
*/
static void __cold try_to_generate_entropy(void)
{
enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = 32 };
enum { NUM_TRIAL_SAMPLES = 8192, MAX_SAMPLES_PER_BIT = HZ / 30 };
struct entropy_timer_state stack;
unsigned int i, num_different = 0;
unsigned long last = random_get_entropy();


@ -439,9 +439,13 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
/* Both presence and absence of sram regulator are valid cases. */
info->sram_reg = regulator_get_optional(cpu_dev, "sram");
if (IS_ERR(info->sram_reg))
if (IS_ERR(info->sram_reg)) {
ret = PTR_ERR(info->sram_reg);
if (ret == -EPROBE_DEFER)
goto out_free_resources;
info->sram_reg = NULL;
else {
} else {
ret = regulator_enable(info->sram_reg);
if (ret) {
dev_warn(cpu_dev, "cpu%d: failed to enable vsram\n", cpu);


@ -6,7 +6,7 @@
#include <linux/efi.h>
#include <linux/reboot.h>
static void (*orig_pm_power_off)(void);
static struct sys_off_handler *efi_sys_off_handler;
int efi_reboot_quirk_mode = -1;
@ -51,15 +51,11 @@ bool __weak efi_poweroff_required(void)
return false;
}
static void efi_power_off(void)
static int efi_power_off(struct sys_off_data *data)
{
efi.reset_system(EFI_RESET_SHUTDOWN, EFI_SUCCESS, 0, NULL);
/*
* The above call should not return, if it does fall back to
* the original power off method (typically ACPI poweroff).
*/
if (orig_pm_power_off)
orig_pm_power_off();
return NOTIFY_DONE;
}
static int __init efi_shutdown_init(void)
@ -68,8 +64,13 @@ static int __init efi_shutdown_init(void)
return -ENODEV;
if (efi_poweroff_required()) {
orig_pm_power_off = pm_power_off;
pm_power_off = efi_power_off;
/* SYS_OFF_PRIO_FIRMWARE + 1 so that it runs before acpi_power_off */
efi_sys_off_handler =
register_sys_off_handler(SYS_OFF_MODE_POWER_OFF,
SYS_OFF_PRIO_FIRMWARE + 1,
efi_power_off, NULL);
if (IS_ERR(efi_sys_off_handler))
return PTR_ERR(efi_sys_off_handler);
}
return 0;


@ -991,28 +991,22 @@ static struct configfs_attribute *gpio_sim_device_config_attrs[] = {
};
struct gpio_sim_chip_name_ctx {
struct gpio_sim_device *dev;
struct fwnode_handle *swnode;
char *page;
};
static int gpio_sim_emit_chip_name(struct device *dev, void *data)
{
struct gpio_sim_chip_name_ctx *ctx = data;
struct fwnode_handle *swnode;
struct gpio_sim_bank *bank;
/* This would be the sysfs device exported in /sys/class/gpio. */
if (dev->class)
return 0;
swnode = dev_fwnode(dev);
list_for_each_entry(bank, &ctx->dev->bank_list, siblings) {
if (bank->swnode == swnode)
if (device_match_fwnode(dev, ctx->swnode))
return sprintf(ctx->page, "%s\n", dev_name(dev));
}
return -ENODATA;
return 0;
}
static ssize_t gpio_sim_bank_config_chip_name_show(struct config_item *item,
@ -1020,7 +1014,7 @@ static ssize_t gpio_sim_bank_config_chip_name_show(struct config_item *item,
{
struct gpio_sim_bank *bank = to_gpio_sim_bank(item);
struct gpio_sim_device *dev = gpio_sim_bank_get_device(bank);
struct gpio_sim_chip_name_ctx ctx = { dev, page };
struct gpio_sim_chip_name_ctx ctx = { bank->swnode, page };
int ret;
mutex_lock(&dev->lock);


@ -421,6 +421,10 @@ out_free_lh:
* @work: the worker that implements software debouncing
* @sw_debounced: flag indicating if the software debouncer is active
* @level: the current debounced physical level of the line
* @hdesc: the Hardware Timestamp Engine (HTE) descriptor
* @raw_level: the line level at the time of event
* @total_discard_seq: the running counter of the discarded events
* @last_seqno: the last sequence number before debounce period expires
*/
struct line {
struct gpio_desc *desc;


@ -256,7 +256,6 @@ config DRM_AMDGPU
select HWMON
select BACKLIGHT_CLASS_DEVICE
select INTERVAL_TREE
select DRM_BUDDY
help
Choose this option if you have a recent AMD Radeon graphics card.


@ -30,15 +30,12 @@
#include <drm/ttm/ttm_resource.h>
#include <drm/ttm/ttm_range_manager.h>
#include "amdgpu_vram_mgr.h"
/* state back for walking over vram_mgr and gtt_mgr allocations */
struct amdgpu_res_cursor {
uint64_t start;
uint64_t size;
uint64_t remaining;
void *node;
uint32_t mem_type;
struct drm_mm_node *node;
};
/**
@ -55,41 +52,19 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
uint64_t start, uint64_t size,
struct amdgpu_res_cursor *cur)
{
struct drm_buddy_block *block;
struct list_head *head, *next;
struct drm_mm_node *node;
if (!res)
goto fallback;
if (!res || res->mem_type == TTM_PL_SYSTEM) {
cur->start = start;
cur->size = size;
cur->remaining = size;
cur->node = NULL;
WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
return;
}
BUG_ON(start + size > res->num_pages << PAGE_SHIFT);
cur->mem_type = res->mem_type;
switch (cur->mem_type) {
case TTM_PL_VRAM:
head = &to_amdgpu_vram_mgr_resource(res)->blocks;
block = list_first_entry_or_null(head,
struct drm_buddy_block,
link);
if (!block)
goto fallback;
while (start >= amdgpu_vram_mgr_block_size(block)) {
start -= amdgpu_vram_mgr_block_size(block);
next = block->link.next;
if (next != head)
block = list_entry(next, struct drm_buddy_block, link);
}
cur->start = amdgpu_vram_mgr_block_start(block) + start;
cur->size = min(amdgpu_vram_mgr_block_size(block) - start, size);
cur->remaining = size;
cur->node = block;
break;
case TTM_PL_TT:
node = to_ttm_range_mgr_node(res)->mm_nodes;
while (start >= node->size << PAGE_SHIFT)
start -= node++->size << PAGE_SHIFT;
@ -98,20 +73,6 @@ static inline void amdgpu_res_first(struct ttm_resource *res,
cur->size = min((node->size << PAGE_SHIFT) - start, size);
cur->remaining = size;
cur->node = node;
break;
default:
goto fallback;
}
return;
fallback:
cur->start = start;
cur->size = size;
cur->remaining = size;
cur->node = NULL;
WARN_ON(res && start + size > res->num_pages << PAGE_SHIFT);
return;
}
/**
@ -124,9 +85,7 @@ fallback:
*/
static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
{
struct drm_buddy_block *block;
struct drm_mm_node *node;
struct list_head *next;
struct drm_mm_node *node = cur->node;
BUG_ON(size > cur->remaining);
@ -140,27 +99,9 @@ static inline void amdgpu_res_next(struct amdgpu_res_cursor *cur, uint64_t size)
return;
}
switch (cur->mem_type) {
case TTM_PL_VRAM:
block = cur->node;
next = block->link.next;
block = list_entry(next, struct drm_buddy_block, link);
cur->node = block;
cur->start = amdgpu_vram_mgr_block_start(block);
cur->size = min(amdgpu_vram_mgr_block_size(block), cur->remaining);
break;
case TTM_PL_TT:
node = cur->node;
cur->node = ++node;
cur->start = node->start << PAGE_SHIFT;
cur->size = min(node->size << PAGE_SHIFT, cur->remaining);
break;
default:
return;
}
}
#endif


@ -26,7 +26,6 @@
#include <linux/dma-direction.h>
#include <drm/gpu_scheduler.h>
#include "amdgpu_vram_mgr.h"
#include "amdgpu.h"
#define AMDGPU_PL_GDS (TTM_PL_PRIV + 0)
@ -39,6 +38,15 @@
#define AMDGPU_POISON 0xd0bed0be
struct amdgpu_vram_mgr {
struct ttm_resource_manager manager;
struct drm_mm mm;
spinlock_t lock;
struct list_head reservations_pending;
struct list_head reserved_pages;
atomic64_t vis_usage;
};
struct amdgpu_gtt_mgr {
struct ttm_resource_manager manager;
struct drm_mm mm;


@ -32,10 +32,8 @@
#include "atom.h"
struct amdgpu_vram_reservation {
u64 start;
u64 size;
struct list_head allocated;
struct list_head blocks;
struct list_head node;
struct drm_mm_node mm_node;
};
static inline struct amdgpu_vram_mgr *
@ -188,18 +186,18 @@ const struct attribute_group amdgpu_vram_mgr_attr_group = {
};
/**
* amdgpu_vram_mgr_vis_size - Calculate visible block size
* amdgpu_vram_mgr_vis_size - Calculate visible node size
*
* @adev: amdgpu_device pointer
* @block: DRM BUDDY block structure
* @node: MM node structure
*
* Calculate how many bytes of the DRM BUDDY block are inside visible VRAM
* Calculate how many bytes of the MM node are inside visible VRAM
*/
static u64 amdgpu_vram_mgr_vis_size(struct amdgpu_device *adev,
struct drm_buddy_block *block)
struct drm_mm_node *node)
{
u64 start = amdgpu_vram_mgr_block_start(block);
u64 end = start + amdgpu_vram_mgr_block_size(block);
uint64_t start = node->start << PAGE_SHIFT;
uint64_t end = (node->size + node->start) << PAGE_SHIFT;
if (start >= adev->gmc.visible_vram_size)
return 0;
@ -220,9 +218,9 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
{
struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
struct ttm_resource *res = bo->tbo.resource;
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
struct drm_buddy_block *block;
u64 usage = 0;
unsigned pages = res->num_pages;
struct drm_mm_node *mm;
u64 usage;
if (amdgpu_gmc_vram_full_visible(&adev->gmc))
return amdgpu_bo_size(bo);
@ -230,8 +228,9 @@ u64 amdgpu_vram_mgr_bo_visible_size(struct amdgpu_bo *bo)
if (res->start >= adev->gmc.visible_vram_size >> PAGE_SHIFT)
return 0;
list_for_each_entry(block, &vres->blocks, link)
usage += amdgpu_vram_mgr_vis_size(adev, block);
mm = &container_of(res, struct ttm_range_mgr_node, base)->mm_nodes[0];
for (usage = 0; pages; pages -= mm->size, mm++)
usage += amdgpu_vram_mgr_vis_size(adev, mm);
return usage;
}
@ -241,30 +240,23 @@ static void amdgpu_vram_mgr_do_reserve(struct ttm_resource_manager *man)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
struct drm_buddy *mm = &mgr->mm;
struct drm_mm *mm = &mgr->mm;
struct amdgpu_vram_reservation *rsv, *temp;
struct drm_buddy_block *block;
uint64_t vis_usage;
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, blocks) {
if (drm_buddy_alloc_blocks(mm, rsv->start, rsv->start + rsv->size,
rsv->size, mm->chunk_size, &rsv->allocated,
DRM_BUDDY_RANGE_ALLOCATION))
continue;
block = amdgpu_vram_mgr_first_block(&rsv->allocated);
if (!block)
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node) {
if (drm_mm_reserve_node(mm, &rsv->mm_node))
continue;
dev_dbg(adev->dev, "Reservation 0x%llx - %lld, Succeeded\n",
rsv->start, rsv->size);
rsv->mm_node.start, rsv->mm_node.size);
vis_usage = amdgpu_vram_mgr_vis_size(adev, block);
vis_usage = amdgpu_vram_mgr_vis_size(adev, &rsv->mm_node);
atomic64_add(vis_usage, &mgr->vis_usage);
spin_lock(&man->bdev->lru_lock);
man->usage += rsv->size;
man->usage += rsv->mm_node.size << PAGE_SHIFT;
spin_unlock(&man->bdev->lru_lock);
list_move(&rsv->blocks, &mgr->reserved_pages);
list_move(&rsv->node, &mgr->reserved_pages);
}
}
@ -286,16 +278,14 @@ int amdgpu_vram_mgr_reserve_range(struct amdgpu_vram_mgr *mgr,
if (!rsv)
return -ENOMEM;
INIT_LIST_HEAD(&rsv->allocated);
INIT_LIST_HEAD(&rsv->blocks);
INIT_LIST_HEAD(&rsv->node);
rsv->mm_node.start = start >> PAGE_SHIFT;
rsv->mm_node.size = size >> PAGE_SHIFT;
rsv->start = start;
rsv->size = size;
mutex_lock(&mgr->lock);
list_add_tail(&rsv->blocks, &mgr->reservations_pending);
spin_lock(&mgr->lock);
list_add_tail(&rsv->node, &mgr->reservations_pending);
amdgpu_vram_mgr_do_reserve(&mgr->manager);
mutex_unlock(&mgr->lock);
spin_unlock(&mgr->lock);
return 0;
}
@ -317,19 +307,19 @@ int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
struct amdgpu_vram_reservation *rsv;
int ret;
mutex_lock(&mgr->lock);
spin_lock(&mgr->lock);
list_for_each_entry(rsv, &mgr->reservations_pending, blocks) {
if (rsv->start <= start &&
(start < (rsv->start + rsv->size))) {
list_for_each_entry(rsv, &mgr->reservations_pending, node) {
if ((rsv->mm_node.start <= start) &&
(start < (rsv->mm_node.start + rsv->mm_node.size))) {
ret = -EBUSY;
goto out;
}
}
list_for_each_entry(rsv, &mgr->reserved_pages, blocks) {
if (rsv->start <= start &&
(start < (rsv->start + rsv->size))) {
list_for_each_entry(rsv, &mgr->reserved_pages, node) {
if ((rsv->mm_node.start <= start) &&
(start < (rsv->mm_node.start + rsv->mm_node.size))) {
ret = 0;
goto out;
}
@ -337,10 +327,32 @@ int amdgpu_vram_mgr_query_page_status(struct amdgpu_vram_mgr *mgr,
ret = -ENOENT;
out:
mutex_unlock(&mgr->lock);
spin_unlock(&mgr->lock);
return ret;
}
/**
* amdgpu_vram_mgr_virt_start - update virtual start address
*
* @mem: ttm_resource to update
* @node: just allocated node
*
* Calculate a virtual BO start address to easily check if everything is CPU
* accessible.
*/
static void amdgpu_vram_mgr_virt_start(struct ttm_resource *mem,
struct drm_mm_node *node)
{
unsigned long start;
start = node->start + node->size;
if (start > mem->num_pages)
start -= mem->num_pages;
else
start = 0;
mem->start = max(mem->start, start);
}
/**
* amdgpu_vram_mgr_new - allocate new ranges
*
@ -356,44 +368,46 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
const struct ttm_place *place,
struct ttm_resource **res)
{
u64 vis_usage = 0, max_bytes, cur_size, min_block_size;
unsigned long lpfn, num_nodes, pages_per_node, pages_left, pages;
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
struct amdgpu_vram_mgr_resource *vres;
u64 size, remaining_size, lpfn, fpfn;
struct drm_buddy *mm = &mgr->mm;
struct drm_buddy_block *block;
unsigned long pages_per_block;
uint64_t vis_usage = 0, mem_bytes, max_bytes;
struct ttm_range_mgr_node *node;
struct drm_mm *mm = &mgr->mm;
enum drm_mm_insert_mode mode;
unsigned i;
int r;
lpfn = place->lpfn << PAGE_SHIFT;
lpfn = place->lpfn;
if (!lpfn)
lpfn = man->size;
fpfn = place->fpfn << PAGE_SHIFT;
lpfn = man->size >> PAGE_SHIFT;
max_bytes = adev->gmc.mc_vram_size;
if (tbo->type != ttm_bo_type_kernel)
max_bytes -= AMDGPU_VM_RESERVED_VRAM;
mem_bytes = tbo->base.size;
if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
pages_per_block = ~0ul;
pages_per_node = ~0ul;
num_nodes = 1;
} else {
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
pages_per_block = HPAGE_PMD_NR;
pages_per_node = HPAGE_PMD_NR;
#else
/* default to 2MB */
pages_per_block = 2UL << (20UL - PAGE_SHIFT);
pages_per_node = 2UL << (20UL - PAGE_SHIFT);
#endif
pages_per_block = max_t(uint32_t, pages_per_block,
pages_per_node = max_t(uint32_t, pages_per_node,
tbo->page_alignment);
num_nodes = DIV_ROUND_UP_ULL(PFN_UP(mem_bytes), pages_per_node);
}
vres = kzalloc(sizeof(*vres), GFP_KERNEL);
if (!vres)
node = kvmalloc(struct_size(node, mm_nodes, num_nodes),
GFP_KERNEL | __GFP_ZERO);
if (!node)
return -ENOMEM;
ttm_resource_init(tbo, place, &vres->base);
ttm_resource_init(tbo, place, &node->base);
/* bail out quickly if there's likely not enough VRAM for this BO */
if (ttm_resource_manager_usage(man) > max_bytes) {
@ -401,130 +415,66 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
goto error_fini;
}
INIT_LIST_HEAD(&vres->blocks);
mode = DRM_MM_INSERT_BEST;
if (place->flags & TTM_PL_FLAG_TOPDOWN)
vres->flags |= DRM_BUDDY_TOPDOWN_ALLOCATION;
mode = DRM_MM_INSERT_HIGH;
if (fpfn || lpfn != man->size)
/* Allocate blocks in desired range */
vres->flags |= DRM_BUDDY_RANGE_ALLOCATION;
pages_left = node->base.num_pages;
remaining_size = vres->base.num_pages << PAGE_SHIFT;
/* Limit maximum size to 2GB due to SG table limitations */
pages = min(pages_left, 2UL << (30 - PAGE_SHIFT));
mutex_lock(&mgr->lock);
while (remaining_size) {
if (tbo->page_alignment)
min_block_size = tbo->page_alignment << PAGE_SHIFT;
i = 0;
spin_lock(&mgr->lock);
while (pages_left) {
uint32_t alignment = tbo->page_alignment;
if (pages >= pages_per_node)
alignment = pages_per_node;
r = drm_mm_insert_node_in_range(mm, &node->mm_nodes[i], pages,
alignment, 0, place->fpfn,
lpfn, mode);
if (unlikely(r)) {
if (pages > pages_per_node) {
if (is_power_of_2(pages))
pages = pages / 2;
else
min_block_size = mgr->default_page_size;
BUG_ON(min_block_size < mm->chunk_size);
/* Limit maximum size to 2GiB due to SG table limitations */
size = min(remaining_size, 2ULL << 30);
if (size >= pages_per_block << PAGE_SHIFT)
min_block_size = pages_per_block << PAGE_SHIFT;
cur_size = size;
if (fpfn + size != place->lpfn << PAGE_SHIFT) {
/*
* Except for actual range allocation, modify the size and
* min_block_size conforming to continuous flag enablement
*/
if (place->flags & TTM_PL_FLAG_CONTIGUOUS) {
size = roundup_pow_of_two(size);
min_block_size = size;
/*
* Modify the size value if size is not
* aligned with min_block_size
*/
} else if (!IS_ALIGNED(size, min_block_size)) {
size = round_up(size, min_block_size);
pages = rounddown_pow_of_two(pages);
continue;
}
goto error_free;
}
r = drm_buddy_alloc_blocks(mm, fpfn,
lpfn,
size,
min_block_size,
&vres->blocks,
vres->flags);
if (unlikely(r))
goto error_free_blocks;
vis_usage += amdgpu_vram_mgr_vis_size(adev, &node->mm_nodes[i]);
amdgpu_vram_mgr_virt_start(&node->base, &node->mm_nodes[i]);
pages_left -= pages;
++i;
if (size > remaining_size)
remaining_size = 0;
else
remaining_size -= size;
if (pages > pages_left)
pages = pages_left;
}
mutex_unlock(&mgr->lock);
spin_unlock(&mgr->lock);
if (cur_size != size) {
struct drm_buddy_block *block;
struct list_head *trim_list;
u64 original_size;
LIST_HEAD(temp);
trim_list = &vres->blocks;
original_size = vres->base.num_pages << PAGE_SHIFT;
/*
* If size value is rounded up to min_block_size, trim the last
* block to the required size
*/
if (!list_is_singular(&vres->blocks)) {
block = list_last_entry(&vres->blocks, typeof(*block), link);
list_move_tail(&block->link, &temp);
trim_list = &temp;
/*
* Compute the original_size value by subtracting the
* last block size with (aligned size - original size)
*/
original_size = amdgpu_vram_mgr_block_size(block) - (size - cur_size);
}
mutex_lock(&mgr->lock);
drm_buddy_block_trim(mm,
original_size,
trim_list);
mutex_unlock(&mgr->lock);
if (!list_empty(&temp))
list_splice_tail(trim_list, &vres->blocks);
}
list_for_each_entry(block, &vres->blocks, link)
vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
block = amdgpu_vram_mgr_first_block(&vres->blocks);
if (!block) {
r = -EINVAL;
goto error_fini;
}
vres->base.start = amdgpu_vram_mgr_block_start(block) >> PAGE_SHIFT;
if (amdgpu_is_vram_mgr_blocks_contiguous(&vres->blocks))
vres->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
if (i == 1)
node->base.placement |= TTM_PL_FLAG_CONTIGUOUS;
if (adev->gmc.xgmi.connected_to_cpu)
vres->base.bus.caching = ttm_cached;
node->base.bus.caching = ttm_cached;
else
vres->base.bus.caching = ttm_write_combined;
node->base.bus.caching = ttm_write_combined;
atomic64_add(vis_usage, &mgr->vis_usage);
*res = &vres->base;
*res = &node->base;
return 0;
error_free_blocks:
drm_buddy_free_list(mm, &vres->blocks);
mutex_unlock(&mgr->lock);
error_free:
while (i--)
drm_mm_remove_node(&node->mm_nodes[i]);
spin_unlock(&mgr->lock);
error_fini:
ttm_resource_fini(man, &vres->base);
kfree(vres);
ttm_resource_fini(man, &node->base);
kvfree(node);
return r;
}
@ -540,26 +490,27 @@ error_fini:
static void amdgpu_vram_mgr_del(struct ttm_resource_manager *man,
struct ttm_resource *res)
{
struct amdgpu_vram_mgr_resource *vres = to_amdgpu_vram_mgr_resource(res);
struct ttm_range_mgr_node *node = to_ttm_range_mgr_node(res);
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct amdgpu_device *adev = to_amdgpu_device(mgr);
struct drm_buddy *mm = &mgr->mm;
struct drm_buddy_block *block;
uint64_t vis_usage = 0;
unsigned i, pages;
mutex_lock(&mgr->lock);
list_for_each_entry(block, &vres->blocks, link)
vis_usage += amdgpu_vram_mgr_vis_size(adev, block);
spin_lock(&mgr->lock);
for (i = 0, pages = res->num_pages; pages;
pages -= node->mm_nodes[i].size, ++i) {
struct drm_mm_node *mm = &node->mm_nodes[i];
drm_mm_remove_node(mm);
vis_usage += amdgpu_vram_mgr_vis_size(adev, mm);
}
amdgpu_vram_mgr_do_reserve(man);
drm_buddy_free_list(mm, &vres->blocks);
mutex_unlock(&mgr->lock);
spin_unlock(&mgr->lock);
atomic64_sub(vis_usage, &mgr->vis_usage);
ttm_resource_fini(man, res);
kfree(vres);
kvfree(node);
}
/**
@ -591,7 +542,7 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
if (!*sgt)
return -ENOMEM;
/* Determine the number of DRM_BUDDY blocks to export */
/* Determine the number of DRM_MM nodes to export */
amdgpu_res_first(res, offset, length, &cursor);
while (cursor.remaining) {
num_entries++;
@ -607,10 +558,10 @@ int amdgpu_vram_mgr_alloc_sgt(struct amdgpu_device *adev,
sg->length = 0;
/*
* Walk down DRM_BUDDY blocks to populate scatterlist nodes
* @note: Use iterator api to get first the DRM_BUDDY block
* Walk down DRM_MM nodes to populate scatterlist nodes
* @note: Use iterator api to get first the DRM_MM node
* and the number of bytes from it. Access the following
* DRM_BUDDY block(s) if more buffer needs to exported
* DRM_MM node(s) if more buffer needs to exported
*/
amdgpu_res_first(res, offset, length, &cursor);
for_each_sgtable_sg((*sgt), sg, i) {
@ -697,22 +648,13 @@ static void amdgpu_vram_mgr_debug(struct ttm_resource_manager *man,
struct drm_printer *printer)
{
struct amdgpu_vram_mgr *mgr = to_vram_mgr(man);
struct drm_buddy *mm = &mgr->mm;
struct drm_buddy_block *block;
drm_printf(printer, " vis usage:%llu\n",
amdgpu_vram_mgr_vis_usage(mgr));
mutex_lock(&mgr->lock);
drm_printf(printer, "default_page_size: %lluKiB\n",
mgr->default_page_size >> 10);
drm_buddy_print(mm, printer);
drm_printf(printer, "reserved:\n");
list_for_each_entry(block, &mgr->reserved_pages, link)
drm_buddy_block_print(mm, block, printer);
mutex_unlock(&mgr->lock);
spin_lock(&mgr->lock);
drm_mm_print(&mgr->mm, printer);
spin_unlock(&mgr->lock);
}
static const struct ttm_resource_manager_func amdgpu_vram_mgr_func = {
@ -732,21 +674,16 @@ int amdgpu_vram_mgr_init(struct amdgpu_device *adev)
{
struct amdgpu_vram_mgr *mgr = &adev->mman.vram_mgr;
struct ttm_resource_manager *man = &mgr->manager;
int err;
ttm_resource_manager_init(man, &adev->mman.bdev,
adev->gmc.real_vram_size);
man->func = &amdgpu_vram_mgr_func;
err = drm_buddy_init(&mgr->mm, man->size, PAGE_SIZE);
if (err)
return err;
mutex_init(&mgr->lock);
drm_mm_init(&mgr->mm, 0, man->size >> PAGE_SHIFT);
spin_lock_init(&mgr->lock);
INIT_LIST_HEAD(&mgr->reservations_pending);
INIT_LIST_HEAD(&mgr->reserved_pages);
mgr->default_page_size = PAGE_SIZE;
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, &mgr->manager);
ttm_resource_manager_set_used(man, true);
@ -774,16 +711,16 @@ void amdgpu_vram_mgr_fini(struct amdgpu_device *adev)
if (ret)
return;
mutex_lock(&mgr->lock);
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, blocks)
spin_lock(&mgr->lock);
list_for_each_entry_safe(rsv, temp, &mgr->reservations_pending, node)
kfree(rsv);
list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, blocks) {
drm_buddy_free_list(&mgr->mm, &rsv->blocks);
list_for_each_entry_safe(rsv, temp, &mgr->reserved_pages, node) {
drm_mm_remove_node(&rsv->mm_node);
kfree(rsv);
}
drm_buddy_fini(&mgr->mm);
mutex_unlock(&mgr->lock);
drm_mm_takedown(&mgr->mm);
spin_unlock(&mgr->lock);
ttm_resource_manager_cleanup(man);
ttm_set_driver_manager(&adev->mman.bdev, TTM_PL_VRAM, NULL);
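
The revert above also restores amdgpu_vram_mgr_vis_size(), which only has to clamp a node's [start, end) byte range against the CPU-visible part of VRAM. The tail of that function is cut off in the hunk, so the stand-alone sketch below is an assumed reconstruction of the clamp rather than the driver's exact code; the 256 MiB window and the node offsets are invented for illustration::

    #include <stdint.h>
    #include <stdio.h>

    /* How many bytes of the range [start, end) fall below the CPU-visible
     * VRAM limit (all values in bytes). */
    static uint64_t vis_bytes(uint64_t start, uint64_t end, uint64_t visible)
    {
        if (start >= visible)
            return 0;               /* node lies entirely above the window */
        if (end > visible)
            end = visible;          /* clamp the tail of the node */
        return end - start;
    }

    int main(void)
    {
        const uint64_t visible = 256ULL << 20;  /* assumed 256 MiB window */

        /* A 128 MiB node starting at 192 MiB: only 64 MiB is visible. */
        printf("%llu\n", (unsigned long long)
               vis_bytes(192ULL << 20, 320ULL << 20, visible));
        return 0;
    }
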


@ -1,89 +0,0 @@
/* SPDX-License-Identifier: MIT
* Copyright 2021 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef __AMDGPU_VRAM_MGR_H__
#define __AMDGPU_VRAM_MGR_H__
#include <drm/drm_buddy.h>
struct amdgpu_vram_mgr {
struct ttm_resource_manager manager;
struct drm_buddy mm;
/* protects access to buffer objects */
struct mutex lock;
struct list_head reservations_pending;
struct list_head reserved_pages;
atomic64_t vis_usage;
u64 default_page_size;
};
struct amdgpu_vram_mgr_resource {
struct ttm_resource base;
struct list_head blocks;
unsigned long flags;
};
static inline u64 amdgpu_vram_mgr_block_start(struct drm_buddy_block *block)
{
return drm_buddy_block_offset(block);
}
static inline u64 amdgpu_vram_mgr_block_size(struct drm_buddy_block *block)
{
return PAGE_SIZE << drm_buddy_block_order(block);
}
static inline struct drm_buddy_block *
amdgpu_vram_mgr_first_block(struct list_head *list)
{
return list_first_entry_or_null(list, struct drm_buddy_block, link);
}
static inline bool amdgpu_is_vram_mgr_blocks_contiguous(struct list_head *head)
{
struct drm_buddy_block *block;
u64 start, size;
block = amdgpu_vram_mgr_first_block(head);
if (!block)
return false;
while (head != block->link.next) {
start = amdgpu_vram_mgr_block_start(block);
size = amdgpu_vram_mgr_block_size(block);
block = list_entry(block->link.next, struct drm_buddy_block, link);
if (start + size != amdgpu_vram_mgr_block_start(block))
return false;
}
return true;
}
static inline struct amdgpu_vram_mgr_resource *
to_amdgpu_vram_mgr_resource(struct ttm_resource *res)
{
return container_of(res, struct amdgpu_vram_mgr_resource, base);
}
#endif


@ -184,6 +184,8 @@ static void kfd_device_info_init(struct kfd_dev *kfd,
/* Navi2x+, Navi1x+ */
if (gc_version == IP_VERSION(10, 3, 6))
kfd->device_info.no_atomic_fw_version = 14;
else if (gc_version == IP_VERSION(10, 3, 7))
kfd->device_info.no_atomic_fw_version = 3;
else if (gc_version >= IP_VERSION(10, 3, 0))
kfd->device_info.no_atomic_fw_version = 92;
else if (gc_version >= IP_VERSION(10, 1, 1))


@ -72,6 +72,7 @@
#include <linux/pci.h>
#include <linux/firmware.h>
#include <linux/component.h>
#include <linux/dmi.h>
#include <drm/display/drm_dp_mst_helper.h>
#include <drm/display/drm_hdmi_helper.h>
@ -462,6 +463,26 @@ static void dm_pflip_high_irq(void *interrupt_params)
vrr_active, (int) !e);
}
static void dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc)
{
struct drm_crtc *crtc = &acrtc->base;
struct drm_device *dev = crtc->dev;
unsigned long flags;
drm_crtc_handle_vblank(crtc);
spin_lock_irqsave(&dev->event_lock, flags);
/* Send completion event for cursor-only commits */
if (acrtc->event && acrtc->pflip_status != AMDGPU_FLIP_SUBMITTED) {
drm_crtc_send_vblank_event(crtc, acrtc->event);
drm_crtc_vblank_put(crtc);
acrtc->event = NULL;
}
spin_unlock_irqrestore(&dev->event_lock, flags);
}
static void dm_vupdate_high_irq(void *interrupt_params)
{
struct common_irq_params *irq_params = interrupt_params;
@ -500,7 +521,7 @@ static void dm_vupdate_high_irq(void *interrupt_params)
* if a pageflip happened inside front-porch.
*/
if (vrr_active) {
drm_crtc_handle_vblank(&acrtc->base);
dm_crtc_handle_vblank(acrtc);
/* BTR processing for pre-DCE12 ASICs */
if (acrtc->dm_irq_params.stream &&
@ -552,7 +573,7 @@ static void dm_crtc_high_irq(void *interrupt_params)
* to dm_vupdate_high_irq after end of front-porch.
*/
if (!vrr_active)
drm_crtc_handle_vblank(&acrtc->base);
dm_crtc_handle_vblank(acrtc);
/**
* Following stuff must happen at start of vblank, for crc
@ -1382,6 +1403,41 @@ static bool dm_should_disable_stutter(struct pci_dev *pdev)
return false;
}
static const struct dmi_system_id hpd_disconnect_quirk_table[] = {
{
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3660"),
},
},
{
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3260"),
},
},
{
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3460"),
},
},
{}
};
static void retrieve_dmi_info(struct amdgpu_display_manager *dm)
{
const struct dmi_system_id *dmi_id;
dm->aux_hpd_discon_quirk = false;
dmi_id = dmi_first_match(hpd_disconnect_quirk_table);
if (dmi_id) {
dm->aux_hpd_discon_quirk = true;
DRM_INFO("aux_hpd_discon_quirk attached\n");
}
}
static int amdgpu_dm_init(struct amdgpu_device *adev)
{
struct dc_init_data init_data;
@ -1508,6 +1564,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
}
INIT_LIST_HEAD(&adev->dm.da_list);
retrieve_dmi_info(&adev->dm);
/* Display Core create. */
adev->dm.dc = dc_create(&init_data);
@ -5407,7 +5466,7 @@ fill_blending_from_plane_state(const struct drm_plane_state *plane_state,
}
}
if (per_pixel_alpha && plane_state->pixel_blend_mode == DRM_MODE_BLEND_COVERAGE)
if (*per_pixel_alpha && plane_state->pixel_blend_mode == DRM_MODE_BLEND_COVERAGE)
*pre_multiplied_alpha = false;
}
@ -9135,6 +9194,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
struct amdgpu_bo *abo;
uint32_t target_vblank, last_flip_vblank;
bool vrr_active = amdgpu_dm_vrr_active(acrtc_state);
bool cursor_update = false;
bool pflip_present = false;
struct {
struct dc_surface_update surface_updates[MAX_SURFACES];
@ -9170,8 +9230,13 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
struct dm_plane_state *dm_new_plane_state = to_dm_plane_state(new_plane_state);
/* Cursor plane is handled after stream updates */
if (plane->type == DRM_PLANE_TYPE_CURSOR)
if (plane->type == DRM_PLANE_TYPE_CURSOR) {
if ((fb && crtc == pcrtc) ||
(old_plane_state->fb && old_plane_state->crtc == pcrtc))
cursor_update = true;
continue;
}
if (!fb || !crtc || pcrtc != crtc)
continue;
@ -9334,6 +9399,17 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
bundle->stream_update.vrr_infopacket =
&acrtc_state->stream->vrr_infopacket;
}
} else if (cursor_update && acrtc_state->active_planes > 0 &&
!acrtc_state->force_dpms_off &&
acrtc_attach->base.state->event) {
drm_crtc_vblank_get(pcrtc);
spin_lock_irqsave(&pcrtc->dev->event_lock, flags);
acrtc_attach->event = acrtc_attach->base.state->event;
acrtc_attach->base.state->event = NULL;
spin_unlock_irqrestore(&pcrtc->dev->event_lock, flags);
}
/* Update the planes if changed or disable if we don't have any. */


@ -540,6 +540,14 @@ struct amdgpu_display_manager {
* last successfully applied backlight values.
*/
u32 actual_brightness[AMDGPU_DM_MAX_NUM_EDP];
/**
* @aux_hpd_discon_quirk:
*
* Quirk for an HPD disconnect occurring while an AUX transaction is
* ongoing; seen on certain Intel platforms.
*/
bool aux_hpd_discon_quirk;
};
enum dsc_clock_force_state {


@ -56,6 +56,8 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
ssize_t result = 0;
struct aux_payload payload;
enum aux_return_code_type operation_result;
struct amdgpu_device *adev;
struct ddc_service *ddc;
if (WARN_ON(msg->size > 16))
return -E2BIG;
@ -74,6 +76,21 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux,
result = dc_link_aux_transfer_raw(TO_DM_AUX(aux)->ddc_service, &payload,
&operation_result);
/*
* Workaround for certain Intel platforms where HPD unexpectedly pulls low
* during the 1st sideband message transaction and AUX_RET_ERROR_HPD_DISCON
* is returned. The aux transaction actually succeeds in such a case, so
* bypass the error.
*/
ddc = TO_DM_AUX(aux)->ddc_service;
adev = ddc->ctx->driver_context;
if (adev->dm.aux_hpd_discon_quirk) {
if (msg->address == DP_SIDEBAND_MSG_DOWN_REQ_BASE &&
operation_result == AUX_RET_ERROR_HPD_DISCON) {
result = 0;
operation_result = AUX_RET_SUCCESS;
}
}
if (payload.write && result >= 0)
result = msg->size;


@ -1117,12 +1117,13 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
* on certain displays, such as the Sharp 4k. 36bpp is needed
* to support SURFACE_PIXEL_FORMAT_GRPH_ARGB16161616 and
* SURFACE_PIXEL_FORMAT_GRPH_ABGR16161616 with actual > 10 bpc
* precision on at least DCN display engines. However, at least
* Carrizo with DCE_VERSION_11_0 does not like 36 bpp lb depth,
* so use only 30 bpp on DCE_VERSION_11_0. Testing with DCE 11.2 and 8.3
* did not show such problems, so this seems to be the exception.
* precision on DCN display engines, but apparently not for DCE, as
* far as testing on DCE-11.2 and DCE-8 showed. Various DCE parts have
* problems: Carrizo with DCE_VERSION_11_0 does not like 36 bpp lb depth,
* neither do DCE-8 at 4k resolution, or DCE-11.2 (broken identify pixel
* passthrough). Therefore only use 36 bpp on DCN where it is actually needed.
*/
if (plane_state->ctx->dce_version > DCE_VERSION_11_0)
if (plane_state->ctx->dce_version > DCE_VERSION_MAX)
pipe_ctx->plane_res.scl_data.lb_params.depth = LB_PIXEL_DEPTH_36BPP;
else
pipe_ctx->plane_res.scl_data.lb_params.depth = LB_PIXEL_DEPTH_30BPP;


@ -1228,6 +1228,8 @@ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
uint32_t crystal_clock_freq = 2500;
uint32_t tach_period;
if (speed == 0)
return -EINVAL;
/*
* To prevent from possible overheat, some ASICs may have requirement
* for minimum fan speed:


@ -60,6 +60,8 @@ __i915_gem_object_create_region(struct intel_memory_region *mem,
if (page_size)
default_page_size = page_size;
/* We should be able to fit a page within an sg entry */
GEM_BUG_ON(overflows_type(default_page_size, u32));
GEM_BUG_ON(!is_power_of_2_u64(default_page_size));
GEM_BUG_ON(default_page_size < PAGE_SIZE);


@ -620,10 +620,15 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
struct ttm_resource *res)
{
struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
u32 page_alignment;
if (!i915_ttm_gtt_binds_lmem(res))
return i915_ttm_tt_get_st(bo->ttm);
page_alignment = bo->page_alignment << PAGE_SHIFT;
if (!page_alignment)
page_alignment = obj->mm.region->min_page_size;
/*
* If CPU mapping differs, we need to add the ttm_tt pages to
* the resulting st. Might make sense for GGTT.
@ -634,7 +639,8 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
struct i915_refct_sgt *rsgt;
rsgt = intel_region_ttm_resource_to_rsgt(obj->mm.region,
res);
res,
page_alignment);
if (IS_ERR(rsgt))
return rsgt;
@ -643,7 +649,8 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
return i915_refct_sgt_get(obj->ttm.cached_io_rsgt);
}
return intel_region_ttm_resource_to_rsgt(obj->mm.region, res);
return intel_region_ttm_resource_to_rsgt(obj->mm.region, res,
page_alignment);
}
static int i915_ttm_truncate(struct drm_i915_gem_object *obj)


@ -9,6 +9,7 @@
#include <linux/jiffies.h>
#include "gt/intel_engine.h"
#include "gt/intel_rps.h"
#include "i915_gem_ioctls.h"
#include "i915_gem_object.h"
@ -31,6 +32,37 @@ i915_gem_object_wait_fence(struct dma_fence *fence,
timeout);
}
static void
i915_gem_object_boost(struct dma_resv *resv, unsigned int flags)
{
struct dma_resv_iter cursor;
struct dma_fence *fence;
/*
* Prescan all fences for potential boosting before we begin waiting.
*
* When we wait, we wait on outstanding fences serially. If the
* dma-resv contains a sequence such as 1:1, 1:2 instead of a reduced
* form 1:2, then as we look at each wait in turn we see that each
* request is currently executing and not worthy of boosting. But if
* we only happen to look at the final fence in the sequence (because
* of request coalescing or splitting between read/write arrays by
* the iterator), then we would boost. As such our decision to boost
* or not is delicately balanced on the order we wait on fences.
*
* So instead of looking for boosts sequentially, look for all boosts
* upfront and then wait on the outstanding fences.
*/
dma_resv_iter_begin(&cursor, resv,
dma_resv_usage_rw(flags & I915_WAIT_ALL));
dma_resv_for_each_fence_unlocked(&cursor, fence)
if (dma_fence_is_i915(fence) &&
!i915_request_started(to_request(fence)))
intel_rps_boost(to_request(fence));
dma_resv_iter_end(&cursor);
}
static long
i915_gem_object_wait_reservation(struct dma_resv *resv,
unsigned int flags,
@ -40,6 +72,8 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
struct dma_fence *fence;
long ret = timeout ?: 1;
i915_gem_object_boost(resv, flags);
dma_resv_iter_begin(&cursor, resv,
dma_resv_usage_rw(flags & I915_WAIT_ALL));
dma_resv_for_each_fence_unlocked(&cursor, fence) {
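
The new i915_gem_object_boost() comment explains the real point of this change: whether a request looks worth boosting depends on the order in which fences are examined, so boosting is now decided for every fence up front and only then are the fences waited on serially. The toy model below illustrates that two-pass structure only; the fence type, field names and the no-op wait are invented and are not the i915 or dma-resv API::

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_fence {
        bool started;   /* request already executing? */
        bool boosted;
    };

    /* Pass 1: decide boosting for every fence up front, so the outcome
     * does not depend on which fence happens to be examined last. */
    static void boost_all(struct toy_fence *f, int n)
    {
        for (int i = 0; i < n; i++)
            if (!f[i].started)
                f[i].boosted = true;
    }

    /* Pass 2: wait on the fences serially (modelled as a no-op here). */
    static void wait_all(struct toy_fence *f, int n)
    {
        for (int i = 0; i < n; i++)
            (void)f[i];
    }

    int main(void)
    {
        struct toy_fence fences[] = { { true, false }, { false, false } };

        boost_all(fences, 2);
        wait_all(fences, 2);
        printf("second fence boosted: %d\n", fences[1].boosted);
        return 0;
    }
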


@ -1209,6 +1209,20 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
mutex_lock(&gt->tlb_invalidate_lock);
intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
spin_lock_irq(&uncore->lock); /* serialise invalidate with GT reset */
for_each_engine(engine, gt, id) {
struct reg_and_bit rb;
rb = get_reg_and_bit(engine, regs == gen8_regs, regs, num);
if (!i915_mmio_reg_offset(rb.reg))
continue;
intel_uncore_write_fw(uncore, rb.reg, rb.bit);
}
spin_unlock_irq(&uncore->lock);
for_each_engine(engine, gt, id) {
/*
* HW architecture suggest typical invalidation time at 40us,
@ -1223,7 +1237,6 @@ void intel_gt_invalidate_tlbs(struct intel_gt *gt)
if (!i915_mmio_reg_offset(rb.reg))
continue;
intel_uncore_write_fw(uncore, rb.reg, rb.bit);
if (__intel_wait_for_register_fw(uncore,
rb.reg, rb.bit, 0,
timeout_us, timeout_ms,


@ -300,7 +300,7 @@ static int gen6_hw_domain_reset(struct intel_gt *gt, u32 hw_domain_mask)
return err;
}
static int gen6_reset_engines(struct intel_gt *gt,
static int __gen6_reset_engines(struct intel_gt *gt,
intel_engine_mask_t engine_mask,
unsigned int retry)
{
@ -321,6 +321,20 @@ static int gen6_reset_engines(struct intel_gt *gt,
return gen6_hw_domain_reset(gt, hw_mask);
}
static int gen6_reset_engines(struct intel_gt *gt,
intel_engine_mask_t engine_mask,
unsigned int retry)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&gt->uncore->lock, flags);
ret = __gen6_reset_engines(gt, engine_mask, retry);
spin_unlock_irqrestore(&gt->uncore->lock, flags);
return ret;
}
static struct intel_engine_cs *find_sfc_paired_vecs_engine(struct intel_engine_cs *engine)
{
int vecs_id;
@ -487,7 +501,7 @@ static void gen11_unlock_sfc(struct intel_engine_cs *engine)
rmw_clear_fw(uncore, sfc_lock.lock_reg, sfc_lock.lock_bit);
}
static int gen11_reset_engines(struct intel_gt *gt,
static int __gen11_reset_engines(struct intel_gt *gt,
intel_engine_mask_t engine_mask,
unsigned int retry)
{
@ -583,8 +597,11 @@ static int gen8_reset_engines(struct intel_gt *gt,
struct intel_engine_cs *engine;
const bool reset_non_ready = retry >= 1;
intel_engine_mask_t tmp;
unsigned long flags;
int ret;
spin_lock_irqsave(&gt->uncore->lock, flags);
for_each_engine_masked(engine, gt, engine_mask, tmp) {
ret = gen8_engine_reset_prepare(engine);
if (ret && !reset_non_ready)
@ -612,17 +629,19 @@ static int gen8_reset_engines(struct intel_gt *gt,
* This is best effort, so ignore any error from the initial reset.
*/
if (IS_DG2(gt->i915) && engine_mask == ALL_ENGINES)
gen11_reset_engines(gt, gt->info.engine_mask, 0);
__gen11_reset_engines(gt, gt->info.engine_mask, 0);
if (GRAPHICS_VER(gt->i915) >= 11)
ret = gen11_reset_engines(gt, engine_mask, retry);
ret = __gen11_reset_engines(gt, engine_mask, retry);
else
ret = gen6_reset_engines(gt, engine_mask, retry);
ret = __gen6_reset_engines(gt, engine_mask, retry);
skip_reset:
for_each_engine_masked(engine, gt, engine_mask, tmp)
gen8_engine_reset_cancel(engine);
spin_unlock_irqrestore(&gt->uncore->lock, flags);
return ret;
}


@ -176,8 +176,8 @@ static int live_lrc_layout(void *arg)
continue;
hw = shmem_pin_map(engine->default_state);
if (IS_ERR(hw)) {
err = PTR_ERR(hw);
if (!hw) {
err = -ENOMEM;
break;
}
hw += LRC_STATE_OFFSET / sizeof(*hw);
@ -365,8 +365,8 @@ static int live_lrc_fixed(void *arg)
continue;
hw = shmem_pin_map(engine->default_state);
if (IS_ERR(hw)) {
err = PTR_ERR(hw);
if (!hw) {
err = -ENOMEM;
break;
}
hw += LRC_STATE_OFFSET / sizeof(*hw);


@ -3117,9 +3117,9 @@ void intel_gvt_update_reg_whitelist(struct intel_vgpu *vgpu)
continue;
vaddr = shmem_pin_map(engine->default_state);
if (IS_ERR(vaddr)) {
gvt_err("failed to map %s->default state, err:%zd\n",
engine->name, PTR_ERR(vaddr));
if (!vaddr) {
gvt_err("failed to map %s->default state\n",
engine->name);
return;
}


@ -68,6 +68,7 @@ void i915_refct_sgt_init(struct i915_refct_sgt *rsgt, size_t size)
* drm_mm_node
* @node: The drm_mm_node.
* @region_start: An offset to add to the dma addresses of the sg list.
* @page_alignment: Required page alignment for each sg entry. Power of two.
*
* Create a struct sg_table, initializing it from a struct drm_mm_node,
* taking a maximum segment length into account, splitting into segments
@ -77,22 +78,25 @@ void i915_refct_sgt_init(struct i915_refct_sgt *rsgt, size_t size)
* error code cast to an error pointer on failure.
*/
struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
u64 region_start)
u64 region_start,
u32 page_alignment)
{
const u64 max_segment = SZ_1G; /* Do we have a limit on this? */
u64 segment_pages = max_segment >> PAGE_SHIFT;
const u32 max_segment = round_down(UINT_MAX, page_alignment);
const u32 segment_pages = max_segment >> PAGE_SHIFT;
u64 block_size, offset, prev_end;
struct i915_refct_sgt *rsgt;
struct sg_table *st;
struct scatterlist *sg;
GEM_BUG_ON(!max_segment);
rsgt = kmalloc(sizeof(*rsgt), GFP_KERNEL);
if (!rsgt)
return ERR_PTR(-ENOMEM);
i915_refct_sgt_init(rsgt, node->size << PAGE_SHIFT);
st = &rsgt->table;
if (sg_alloc_table(st, DIV_ROUND_UP(node->size, segment_pages),
if (sg_alloc_table(st, DIV_ROUND_UP_ULL(node->size, segment_pages),
GFP_KERNEL)) {
i915_refct_sgt_put(rsgt);
return ERR_PTR(-ENOMEM);
@ -112,12 +116,14 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
sg = __sg_next(sg);
sg_dma_address(sg) = region_start + offset;
GEM_BUG_ON(!IS_ALIGNED(sg_dma_address(sg),
page_alignment));
sg_dma_len(sg) = 0;
sg->length = 0;
st->nents++;
}
len = min(block_size, max_segment - sg->length);
len = min_t(u64, block_size, max_segment - sg->length);
sg->length += len;
sg_dma_len(sg) += len;
@ -138,6 +144,7 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
* i915_buddy_block list
* @res: The struct i915_ttm_buddy_resource.
* @region_start: An offset to add to the dma addresses of the sg list.
* @page_alignment: Required page alignment for each sg entry. Power of two.
*
* Create a struct sg_table, initializing it from struct i915_buddy_block list,
* taking a maximum segment length into account, splitting into segments
@ -147,11 +154,12 @@ struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
* error code cast to an error pointer on failure.
*/
struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
u64 region_start)
u64 region_start,
u32 page_alignment)
{
struct i915_ttm_buddy_resource *bman_res = to_ttm_buddy_resource(res);
const u64 size = res->num_pages << PAGE_SHIFT;
const u64 max_segment = rounddown(UINT_MAX, PAGE_SIZE);
const u32 max_segment = round_down(UINT_MAX, page_alignment);
struct drm_buddy *mm = bman_res->mm;
struct list_head *blocks = &bman_res->blocks;
struct drm_buddy_block *block;
@ -161,6 +169,7 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
resource_size_t prev_end;
GEM_BUG_ON(list_empty(blocks));
GEM_BUG_ON(!max_segment);
rsgt = kmalloc(sizeof(*rsgt), GFP_KERNEL);
if (!rsgt)
@ -191,12 +200,14 @@ struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
sg = __sg_next(sg);
sg_dma_address(sg) = region_start + offset;
GEM_BUG_ON(!IS_ALIGNED(sg_dma_address(sg),
page_alignment));
sg_dma_len(sg) = 0;
sg->length = 0;
st->nents++;
}
len = min(block_size, max_segment - sg->length);
len = min_t(u64, block_size, max_segment - sg->length);
sg->length += len;
sg_dma_len(sg) += len;
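
Both sg-table builders in this file now derive the segment cap from the new page_alignment argument, max_segment = round_down(UINT_MAX, page_alignment), so every scatterlist entry both fits the 32-bit length field and stays a multiple of the region's minimum page size. Below is a quick stand-alone check of that arithmetic; the 4 KiB and 64 KiB alignments are example values (64 KiB being the DG2-style minimum page size used elsewhere in the series)::

    #include <stdint.h>
    #include <stdio.h>

    /* round_down(): truncate x to a multiple of the power-of-two y. */
    #define round_down(x, y)    ((x) & ~((uint64_t)(y) - 1))

    int main(void)
    {
        const uint32_t alignments[] = { 4096, 64 * 1024 };

        for (int i = 0; i < 2; i++) {
            /* Largest segment that is < 4 GiB and still page aligned. */
            uint64_t max_seg = round_down((uint64_t)UINT32_MAX, alignments[i]);

            printf("alignment %6u -> max_segment %llu\n",
                   alignments[i], (unsigned long long)max_seg);
        }
        return 0;
    }
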


@ -213,9 +213,11 @@ static inline void __i915_refct_sgt_init(struct i915_refct_sgt *rsgt,
void i915_refct_sgt_init(struct i915_refct_sgt *rsgt, size_t size);
struct i915_refct_sgt *i915_rsgt_from_mm_node(const struct drm_mm_node *node,
u64 region_start);
u64 region_start,
u32 page_alignment);
struct i915_refct_sgt *i915_rsgt_from_buddy_resource(struct ttm_resource *res,
u64 region_start);
u64 region_start,
u32 page_alignment);
#endif


@ -152,6 +152,7 @@ int intel_region_ttm_fini(struct intel_memory_region *mem)
* Convert an opaque TTM resource manager resource to a refcounted sg_table.
* @mem: The memory region.
* @res: The resource manager resource obtained from the TTM resource manager.
* @page_alignment: Required page alignment for each sg entry. Power of two.
*
* The gem backends typically use sg-tables for operations on the underlying
* io_memory. So provide a way for the backends to translate the
@ -161,16 +162,19 @@ int intel_region_ttm_fini(struct intel_memory_region *mem)
*/
struct i915_refct_sgt *
intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem,
struct ttm_resource *res)
struct ttm_resource *res,
u32 page_alignment)
{
if (mem->is_range_manager) {
struct ttm_range_mgr_node *range_node =
to_ttm_range_mgr_node(res);
return i915_rsgt_from_mm_node(&range_node->mm_nodes[0],
mem->region.start);
mem->region.start,
page_alignment);
} else {
return i915_rsgt_from_buddy_resource(res, mem->region.start);
return i915_rsgt_from_buddy_resource(res, mem->region.start,
page_alignment);
}
}


@ -24,7 +24,8 @@ int intel_region_ttm_fini(struct intel_memory_region *mem);
struct i915_refct_sgt *
intel_region_ttm_resource_to_rsgt(struct intel_memory_region *mem,
struct ttm_resource *res);
struct ttm_resource *res,
u32 page_alignment);
void intel_region_ttm_resource_free(struct intel_memory_region *mem,
struct ttm_resource *res);


@ -742,7 +742,7 @@ static int pot_hole(struct i915_address_space *vm,
u64 addr;
for (addr = round_up(hole_start + min_alignment, step) - min_alignment;
addr <= round_down(hole_end - (2 * min_alignment), step) - min_alignment;
hole_end > addr && hole_end - addr >= 2 * min_alignment;
addr += step) {
err = i915_vma_pin(vma, 0, 0, addr | flags);
if (err) {
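
The rewritten pot_hole() loop bound looks like an unsigned-arithmetic fix: the old expression subtracted 2 * min_alignment from hole_end before comparing, which wraps to a huge value whenever the hole is smaller than two alignment units, while the new form tests the remaining distance explicitly. That motivation is an inference from the diff, not stated in it; the numbers below are made up to show the wrap::

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t hole_end = 4096, min_alignment = 4096, addr = 0;

        /* Old-style bound: underflows when hole_end < 2 * min_alignment. */
        uint64_t old_bound = hole_end - 2 * min_alignment;

        /* New-style check: explicit comparison, cannot wrap. */
        bool iterate = hole_end > addr && hole_end - addr >= 2 * min_alignment;

        printf("old bound: %llu (wrapped)\n", (unsigned long long)old_bound);
        printf("new check says iterate: %s\n", iterate ? "yes" : "no");
        return 0;
    }
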


@ -451,7 +451,6 @@ out_put:
static int igt_mock_max_segment(void *arg)
{
const unsigned int max_segment = rounddown(UINT_MAX, PAGE_SIZE);
struct intel_memory_region *mem = arg;
struct drm_i915_private *i915 = mem->i915;
struct i915_ttm_buddy_resource *res;
@ -460,7 +459,10 @@ static int igt_mock_max_segment(void *arg)
struct drm_buddy *mm;
struct list_head *blocks;
struct scatterlist *sg;
I915_RND_STATE(prng);
LIST_HEAD(objects);
unsigned int max_segment;
unsigned int ps;
u64 size;
int err = 0;
@ -472,7 +474,13 @@ static int igt_mock_max_segment(void *arg)
*/
size = SZ_8G;
mem = mock_region_create(i915, 0, size, PAGE_SIZE, 0, 0);
ps = PAGE_SIZE;
if (i915_prandom_u64_state(&prng) & 1)
ps = SZ_64K; /* For something like DG2 */
max_segment = round_down(UINT_MAX, ps);
mem = mock_region_create(i915, 0, size, ps, 0, 0);
if (IS_ERR(mem))
return PTR_ERR(mem);
@ -498,12 +506,21 @@ static int igt_mock_max_segment(void *arg)
}
for (sg = obj->mm.pages->sgl; sg; sg = sg_next(sg)) {
dma_addr_t daddr = sg_dma_address(sg);
if (sg->length > max_segment) {
pr_err("%s: Created an oversized scatterlist entry, %u > %u\n",
__func__, sg->length, max_segment);
err = -EINVAL;
goto out_close;
}
if (!IS_ALIGNED(daddr, ps)) {
pr_err("%s: Created an unaligned scatterlist entry, addr=%pa, ps=%u\n",
__func__, &daddr, ps);
err = -EINVAL;
goto out_close;
}
}
out_close:


@ -33,7 +33,8 @@ static int mock_region_get_pages(struct drm_i915_gem_object *obj)
return PTR_ERR(obj->mm.res);
obj->mm.rsgt = intel_region_ttm_resource_to_rsgt(obj->mm.region,
obj->mm.res);
obj->mm.res,
obj->mm.region->min_page_size);
if (IS_ERR(obj->mm.rsgt)) {
err = PTR_ERR(obj->mm.rsgt);
goto err_free_resource;


@ -4231,10 +4231,6 @@ void irdma_cm_teardown_connections(struct irdma_device *iwdev, u32 *ipaddr,
struct irdma_cm_node *cm_node;
struct list_head teardown_list;
struct ib_qp_attr attr;
struct irdma_sc_vsi *vsi = &iwdev->vsi;
struct irdma_sc_qp *sc_qp;
struct irdma_qp *qp;
int i;
INIT_LIST_HEAD(&teardown_list);
@ -4251,52 +4247,6 @@ void irdma_cm_teardown_connections(struct irdma_device *iwdev, u32 *ipaddr,
irdma_cm_disconn(cm_node->iwqp);
irdma_rem_ref_cm_node(cm_node);
}
if (!iwdev->roce_mode)
return;
INIT_LIST_HEAD(&teardown_list);
for (i = 0; i < IRDMA_MAX_USER_PRIORITY; i++) {
mutex_lock(&vsi->qos[i].qos_mutex);
list_for_each_safe (list_node, list_core_temp,
&vsi->qos[i].qplist) {
u32 qp_ip[4];
sc_qp = container_of(list_node, struct irdma_sc_qp,
list);
if (sc_qp->qp_uk.qp_type != IRDMA_QP_TYPE_ROCE_RC)
continue;
qp = sc_qp->qp_uk.back_qp;
if (!disconnect_all) {
if (nfo->ipv4)
qp_ip[0] = qp->udp_info.local_ipaddr[3];
else
memcpy(qp_ip,
&qp->udp_info.local_ipaddr[0],
sizeof(qp_ip));
}
if (disconnect_all ||
(nfo->vlan_id == (qp->udp_info.vlan_tag & VLAN_VID_MASK) &&
!memcmp(qp_ip, ipaddr, nfo->ipv4 ? 4 : 16))) {
spin_lock(&iwdev->rf->qptable_lock);
if (iwdev->rf->qp_table[sc_qp->qp_uk.qp_id]) {
irdma_qp_add_ref(&qp->ibqp);
list_add(&qp->teardown_entry,
&teardown_list);
}
spin_unlock(&iwdev->rf->qptable_lock);
}
}
mutex_unlock(&vsi->qos[i].qos_mutex);
}
list_for_each_safe (list_node, list_core_temp, &teardown_list) {
qp = container_of(list_node, struct irdma_qp, teardown_entry);
attr.qp_state = IB_QPS_ERR;
irdma_modify_qp_roce(&qp->ibqp, &attr, IB_QP_STATE, NULL);
irdma_qp_rem_ref(&qp->ibqp);
}
}
/**


@ -201,6 +201,7 @@ void i40iw_init_hw(struct irdma_sc_dev *dev)
dev->hw_attrs.uk_attrs.max_hw_read_sges = I40IW_MAX_SGE_RD;
dev->hw_attrs.max_hw_device_pages = I40IW_MAX_PUSH_PAGE_COUNT;
dev->hw_attrs.uk_attrs.max_hw_inline = I40IW_MAX_INLINE_DATA_SIZE;
dev->hw_attrs.page_size_cap = SZ_4K | SZ_2M;
dev->hw_attrs.max_hw_ird = I40IW_MAX_IRD_SIZE;
dev->hw_attrs.max_hw_ord = I40IW_MAX_ORD_SIZE;
dev->hw_attrs.max_hw_wqes = I40IW_MAX_WQ_ENTRIES;


@ -139,6 +139,7 @@ void icrdma_init_hw(struct irdma_sc_dev *dev)
dev->cqp_db = dev->hw_regs[IRDMA_CQPDB];
dev->cq_ack_db = dev->hw_regs[IRDMA_CQACK];
dev->irq_ops = &icrdma_irq_ops;
dev->hw_attrs.page_size_cap = SZ_4K | SZ_2M | SZ_1G;
dev->hw_attrs.max_hw_ird = ICRDMA_MAX_IRD_SIZE;
dev->hw_attrs.max_hw_ord = ICRDMA_MAX_ORD_SIZE;
dev->hw_attrs.max_stat_inst = ICRDMA_MAX_STATS_COUNT;


@ -127,6 +127,7 @@ struct irdma_hw_attrs {
u64 max_hw_outbound_msg_size;
u64 max_hw_inbound_msg_size;
u64 max_mr_size;
u64 page_size_cap;
u32 min_hw_qp_id;
u32 min_hw_aeq_size;
u32 max_hw_aeq_size;


@ -32,7 +32,7 @@ static int irdma_query_device(struct ib_device *ibdev,
props->vendor_part_id = pcidev->device;
props->hw_ver = rf->pcidev->revision;
props->page_size_cap = SZ_4K | SZ_2M | SZ_1G;
props->page_size_cap = hw_attrs->page_size_cap;
props->max_mr_size = hw_attrs->max_mr_size;
props->max_qp = rf->max_qp - rf->used_qps;
props->max_qp_wr = hw_attrs->max_qp_wr;
@ -2781,7 +2781,7 @@ static struct ib_mr *irdma_reg_user_mr(struct ib_pd *pd, u64 start, u64 len,
if (req.reg_type == IRDMA_MEMREG_TYPE_MEM) {
iwmr->page_size = ib_umem_find_best_pgsz(region,
SZ_4K | SZ_2M | SZ_1G,
iwdev->rf->sc_dev.hw_attrs.page_size_cap,
virt);
if (unlikely(!iwmr->page_size)) {
kfree(iwmr);
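
Here page_size_cap becomes a per-generation bitmask of mappable page sizes (SZ_4K | SZ_2M on i40iw, plus SZ_1G on icrdma), and ib_umem_find_best_pgsz() picks the largest size from that mask usable for the user buffer. The sketch below is a deliberately simplified user-space stand-in: it only checks that a candidate size divides the buffer start and length, whereas the real helper also accounts for the IOVA and the physical layout of the umem::

    #include <stdint.h>
    #include <stdio.h>

    #define SZ_4K   0x1000ULL
    #define SZ_2M   0x200000ULL
    #define SZ_1G   0x40000000ULL

    /* Toy page-size selection: largest size in the capability mask that
     * divides both the buffer start and its length. */
    static uint64_t toy_best_pgsz(uint64_t cap, uint64_t start, uint64_t len)
    {
        uint64_t best = 0;

        for (uint64_t bits = cap; bits; bits &= bits - 1) {
            uint64_t sz = bits & -bits;     /* lowest set bit */

            if (!(start % sz) && !(len % sz))
                best = sz;                  /* bits ascend, keep the largest */
        }
        return best;
    }

    int main(void)
    {
        /* A buffer that is 2 MiB aligned and 8 MiB long: expect SZ_2M. */
        printf("0x%llx\n", (unsigned long long)
               toy_best_pgsz(SZ_4K | SZ_2M | SZ_1G, SZ_2M, 4 * SZ_2M));
        return 0;
    }
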


@ -900,6 +900,11 @@ static int goodix_add_acpi_gpio_mappings(struct goodix_ts_data *ts)
} else {
dev_warn(dev, "Unexpected ACPI resources: gpio_count %d, gpio_int_idx %d\n",
ts->gpio_count, ts->gpio_int_idx);
/*
* On some devices _PS0 does a reset for us and
* sometimes this is necessary for things to work.
*/
acpi_device_fix_up_power(ACPI_COMPANION(dev));
return -EINVAL;
}


@ -1654,6 +1654,9 @@ static int usbtouch_probe(struct usb_interface *intf,
if (id->driver_info == DEVTYPE_IGNORE)
return -ENODEV;
if (id->driver_info >= ARRAY_SIZE(usbtouch_dev_info))
return -ENODEV;
endpoint = usbtouch_get_input_endpoint(intf->cur_altsetting);
if (!endpoint)
return -ENXIO;


@ -758,7 +758,9 @@ batt_err:
static int wm97xx_mfd_remove(struct platform_device *pdev)
{
return wm97xx_remove(&pdev->dev);
wm97xx_remove(&pdev->dev);
return 0;
}
static int __maybe_unused wm97xx_suspend(struct device *dev)


@ -563,7 +563,7 @@ static struct sk_buff *amt_build_igmp_gq(struct amt_dev *amt)
ihv3->nsrcs = 0;
ihv3->resv = 0;
ihv3->suppress = false;
ihv3->qrv = amt->net->ipv4.sysctl_igmp_qrv;
ihv3->qrv = READ_ONCE(amt->net->ipv4.sysctl_igmp_qrv);
ihv3->csum = 0;
csum = &ihv3->csum;
csum_start = (void *)ihv3;
@ -577,14 +577,14 @@ static struct sk_buff *amt_build_igmp_gq(struct amt_dev *amt)
return skb;
}
static void __amt_update_gw_status(struct amt_dev *amt, enum amt_status status,
static void amt_update_gw_status(struct amt_dev *amt, enum amt_status status,
bool validate)
{
if (validate && amt->status >= status)
return;
netdev_dbg(amt->dev, "Update GW status %s -> %s",
status_str[amt->status], status_str[status]);
amt->status = status;
WRITE_ONCE(amt->status, status);
}
static void __amt_update_relay_status(struct amt_tunnel_list *tunnel,
@ -600,14 +600,6 @@ static void __amt_update_relay_status(struct amt_tunnel_list *tunnel,
tunnel->status = status;
}
static void amt_update_gw_status(struct amt_dev *amt, enum amt_status status,
bool validate)
{
spin_lock_bh(&amt->lock);
__amt_update_gw_status(amt, status, validate);
spin_unlock_bh(&amt->lock);
}
static void amt_update_relay_status(struct amt_tunnel_list *tunnel,
enum amt_status status, bool validate)
{
@ -700,9 +692,7 @@ static void amt_send_discovery(struct amt_dev *amt)
if (unlikely(net_xmit_eval(err)))
amt->dev->stats.tx_errors++;
spin_lock_bh(&amt->lock);
__amt_update_gw_status(amt, AMT_STATUS_SENT_DISCOVERY, true);
spin_unlock_bh(&amt->lock);
amt_update_gw_status(amt, AMT_STATUS_SENT_DISCOVERY, true);
out:
rcu_read_unlock();
}
@ -900,6 +890,28 @@ static void amt_send_mld_gq(struct amt_dev *amt, struct amt_tunnel_list *tunnel)
}
#endif
static bool amt_queue_event(struct amt_dev *amt, enum amt_event event,
struct sk_buff *skb)
{
int index;
spin_lock_bh(&amt->lock);
if (amt->nr_events >= AMT_MAX_EVENTS) {
spin_unlock_bh(&amt->lock);
return 1;
}
index = (amt->event_idx + amt->nr_events) % AMT_MAX_EVENTS;
amt->events[index].event = event;
amt->events[index].skb = skb;
amt->nr_events++;
amt->event_idx %= AMT_MAX_EVENTS;
queue_work(amt_wq, &amt->event_wq);
spin_unlock_bh(&amt->lock);
return 0;
}
static void amt_secret_work(struct work_struct *work)
{
struct amt_dev *amt = container_of(to_delayed_work(work),
@ -913,24 +925,61 @@ static void amt_secret_work(struct work_struct *work)
msecs_to_jiffies(AMT_SECRET_TIMEOUT));
}
static void amt_event_send_discovery(struct amt_dev *amt)
{
if (amt->status > AMT_STATUS_SENT_DISCOVERY)
goto out;
get_random_bytes(&amt->nonce, sizeof(__be32));
amt_send_discovery(amt);
out:
mod_delayed_work(amt_wq, &amt->discovery_wq,
msecs_to_jiffies(AMT_DISCOVERY_TIMEOUT));
}
static void amt_discovery_work(struct work_struct *work)
{
struct amt_dev *amt = container_of(to_delayed_work(work),
struct amt_dev,
discovery_wq);
spin_lock_bh(&amt->lock);
if (amt->status > AMT_STATUS_SENT_DISCOVERY)
goto out;
get_random_bytes(&amt->nonce, sizeof(__be32));
spin_unlock_bh(&amt->lock);
amt_send_discovery(amt);
spin_lock_bh(&amt->lock);
out:
if (amt_queue_event(amt, AMT_EVENT_SEND_DISCOVERY, NULL))
mod_delayed_work(amt_wq, &amt->discovery_wq,
msecs_to_jiffies(AMT_DISCOVERY_TIMEOUT));
spin_unlock_bh(&amt->lock);
}
static void amt_event_send_request(struct amt_dev *amt)
{
u32 exp;
if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT)
goto out;
if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
netdev_dbg(amt->dev, "Gateway is not ready");
amt->qi = AMT_INIT_REQ_TIMEOUT;
WRITE_ONCE(amt->ready4, false);
WRITE_ONCE(amt->ready6, false);
amt->remote_ip = 0;
amt_update_gw_status(amt, AMT_STATUS_INIT, false);
amt->req_cnt = 0;
amt->nonce = 0;
goto out;
}
if (!amt->req_cnt) {
WRITE_ONCE(amt->ready4, false);
WRITE_ONCE(amt->ready6, false);
get_random_bytes(&amt->nonce, sizeof(__be32));
}
amt_send_request(amt, false);
amt_send_request(amt, true);
amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
amt->req_cnt++;
out:
exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
}
static void amt_req_work(struct work_struct *work)
@ -938,33 +987,10 @@ static void amt_req_work(struct work_struct *work)
struct amt_dev *amt = container_of(to_delayed_work(work),
struct amt_dev,
req_wq);
u32 exp;
spin_lock_bh(&amt->lock);
if (amt->status < AMT_STATUS_RECEIVED_ADVERTISEMENT)
goto out;
if (amt->req_cnt > AMT_MAX_REQ_COUNT) {
netdev_dbg(amt->dev, "Gateway is not ready");
amt->qi = AMT_INIT_REQ_TIMEOUT;
amt->ready4 = false;
amt->ready6 = false;
amt->remote_ip = 0;
__amt_update_gw_status(amt, AMT_STATUS_INIT, false);
amt->req_cnt = 0;
goto out;
}
spin_unlock_bh(&amt->lock);
amt_send_request(amt, false);
amt_send_request(amt, true);
spin_lock_bh(&amt->lock);
__amt_update_gw_status(amt, AMT_STATUS_SENT_REQUEST, true);
amt->req_cnt++;
out:
exp = min_t(u32, (1 * (1 << amt->req_cnt)), AMT_MAX_REQ_TIMEOUT);
mod_delayed_work(amt_wq, &amt->req_wq, msecs_to_jiffies(exp * 1000));
spin_unlock_bh(&amt->lock);
if (amt_queue_event(amt, AMT_EVENT_SEND_REQUEST, NULL))
mod_delayed_work(amt_wq, &amt->req_wq,
msecs_to_jiffies(100));
}
static bool amt_send_membership_update(struct amt_dev *amt,
@ -1220,7 +1246,8 @@ static netdev_tx_t amt_dev_xmit(struct sk_buff *skb, struct net_device *dev)
/* Gateway only passes IGMP/MLD packets */
if (!report)
goto free;
if ((!v6 && !amt->ready4) || (v6 && !amt->ready6))
if ((!v6 && !READ_ONCE(amt->ready4)) ||
(v6 && !READ_ONCE(amt->ready6)))
goto free;
if (amt_send_membership_update(amt, skb, v6))
goto free;
@ -2236,6 +2263,10 @@ static bool amt_advertisement_handler(struct amt_dev *amt, struct sk_buff *skb)
ipv4_is_zeronet(amta->ip4))
return true;
if (amt->status != AMT_STATUS_SENT_DISCOVERY ||
amt->nonce != amta->nonce)
return true;
amt->remote_ip = amta->ip4;
netdev_dbg(amt->dev, "advertised remote ip = %pI4\n", &amt->remote_ip);
mod_delayed_work(amt_wq, &amt->req_wq, 0);
@ -2251,6 +2282,9 @@ static bool amt_multicast_data_handler(struct amt_dev *amt, struct sk_buff *skb)
struct ethhdr *eth;
struct iphdr *iph;
if (READ_ONCE(amt->status) != AMT_STATUS_SENT_UPDATE)
return true;
hdr_size = sizeof(*amtmd) + sizeof(struct udphdr);
if (!pskb_may_pull(skb, hdr_size))
return true;
@ -2325,6 +2359,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
if (amtmq->reserved || amtmq->version)
return true;
if (amtmq->nonce != amt->nonce)
return true;
hdr_size -= sizeof(*eth);
if (iptunnel_pull_header(skb, hdr_size, htons(ETH_P_TEB), false))
return true;
@ -2339,6 +2376,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
iph = ip_hdr(skb);
if (iph->version == 4) {
if (READ_ONCE(amt->ready4))
return true;
if (!pskb_may_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS +
sizeof(*ihv3)))
return true;
@ -2349,12 +2389,10 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
ihv3 = skb_pull(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
skb_reset_transport_header(skb);
skb_push(skb, sizeof(*iph) + AMT_IPHDR_OPTS);
spin_lock_bh(&amt->lock);
amt->ready4 = true;
WRITE_ONCE(amt->ready4, true);
amt->mac = amtmq->response_mac;
amt->req_cnt = 0;
amt->qi = ihv3->qqic;
spin_unlock_bh(&amt->lock);
skb->protocol = htons(ETH_P_IP);
eth->h_proto = htons(ETH_P_IP);
ip_eth_mc_map(iph->daddr, eth->h_dest);
@ -2363,6 +2401,9 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
struct mld2_query *mld2q;
struct ipv6hdr *ip6h;
if (READ_ONCE(amt->ready6))
return true;
if (!pskb_may_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS +
sizeof(*mld2q)))
return true;
@ -2374,12 +2415,10 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
mld2q = skb_pull(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
skb_reset_transport_header(skb);
skb_push(skb, sizeof(*ip6h) + AMT_IP6HDR_OPTS);
spin_lock_bh(&amt->lock);
amt->ready6 = true;
WRITE_ONCE(amt->ready6, true);
amt->mac = amtmq->response_mac;
amt->req_cnt = 0;
amt->qi = mld2q->mld2q_qqic;
spin_unlock_bh(&amt->lock);
skb->protocol = htons(ETH_P_IPV6);
eth->h_proto = htons(ETH_P_IPV6);
ipv6_eth_mc_map(&ip6h->daddr, eth->h_dest);
@ -2392,12 +2431,14 @@ static bool amt_membership_query_handler(struct amt_dev *amt,
skb->pkt_type = PACKET_MULTICAST;
skb->ip_summed = CHECKSUM_NONE;
len = skb->len;
local_bh_disable();
if (__netif_rx(skb) == NET_RX_SUCCESS) {
amt_update_gw_status(amt, AMT_STATUS_RECEIVED_QUERY, true);
dev_sw_netstats_rx_add(amt->dev, len);
} else {
amt->dev->stats.rx_dropped++;
}
local_bh_enable();
return false;
}
@ -2638,7 +2679,9 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
if (tunnel->ip4 == iph->saddr)
goto send;
spin_lock_bh(&amt->lock);
if (amt->nr_tunnels >= amt->max_tunnels) {
spin_unlock_bh(&amt->lock);
icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0);
return true;
}
@ -2646,8 +2689,10 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
tunnel = kzalloc(sizeof(*tunnel) +
(sizeof(struct hlist_head) * amt->hash_buckets),
GFP_ATOMIC);
if (!tunnel)
if (!tunnel) {
spin_unlock_bh(&amt->lock);
return true;
}
tunnel->source_port = udph->source;
tunnel->ip4 = iph->saddr;
@ -2660,10 +2705,9 @@ static bool amt_request_handler(struct amt_dev *amt, struct sk_buff *skb)
INIT_DELAYED_WORK(&tunnel->gc_wq, amt_tunnel_expire);
spin_lock_bh(&amt->lock);
list_add_tail_rcu(&tunnel->list, &amt->tunnel_list);
tunnel->key = amt->key;
amt_update_relay_status(tunnel, AMT_STATUS_RECEIVED_REQUEST, true);
__amt_update_relay_status(tunnel, AMT_STATUS_RECEIVED_REQUEST, true);
amt->nr_tunnels++;
mod_delayed_work(amt_wq, &tunnel->gc_wq,
msecs_to_jiffies(amt_gmi(amt)));
@ -2688,6 +2732,38 @@ send:
return false;
}
static void amt_gw_rcv(struct amt_dev *amt, struct sk_buff *skb)
{
int type = amt_parse_type(skb);
int err = 1;
if (type == -1)
goto drop;
if (amt->mode == AMT_MODE_GATEWAY) {
switch (type) {
case AMT_MSG_ADVERTISEMENT:
err = amt_advertisement_handler(amt, skb);
break;
case AMT_MSG_MEMBERSHIP_QUERY:
err = amt_membership_query_handler(amt, skb);
if (!err)
return;
break;
default:
netdev_dbg(amt->dev, "Invalid type of Gateway\n");
break;
}
}
drop:
if (err) {
amt->dev->stats.rx_dropped++;
kfree_skb(skb);
} else {
consume_skb(skb);
}
}
static int amt_rcv(struct sock *sk, struct sk_buff *skb)
{
struct amt_dev *amt;
@ -2719,8 +2795,12 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
err = true;
goto drop;
}
err = amt_advertisement_handler(amt, skb);
break;
if (amt_queue_event(amt, AMT_EVENT_RECEIVE, skb)) {
netdev_dbg(amt->dev, "AMT Event queue full\n");
err = true;
goto drop;
}
goto out;
case AMT_MSG_MULTICAST_DATA:
if (iph->saddr != amt->remote_ip) {
netdev_dbg(amt->dev, "Invalid Relay IP\n");
@ -2738,10 +2818,11 @@ static int amt_rcv(struct sock *sk, struct sk_buff *skb)
err = true;
goto drop;
}
err = amt_membership_query_handler(amt, skb);
if (err)
if (amt_queue_event(amt, AMT_EVENT_RECEIVE, skb)) {
netdev_dbg(amt->dev, "AMT Event queue full\n");
err = true;
goto drop;
else
}
goto out;
default:
err = true;
@ -2780,6 +2861,46 @@ out:
return 0;
}
static void amt_event_work(struct work_struct *work)
{
struct amt_dev *amt = container_of(work, struct amt_dev, event_wq);
struct sk_buff *skb;
u8 event;
int i;
for (i = 0; i < AMT_MAX_EVENTS; i++) {
spin_lock_bh(&amt->lock);
if (amt->nr_events == 0) {
spin_unlock_bh(&amt->lock);
return;
}
event = amt->events[amt->event_idx].event;
skb = amt->events[amt->event_idx].skb;
amt->events[amt->event_idx].event = AMT_EVENT_NONE;
amt->events[amt->event_idx].skb = NULL;
amt->nr_events--;
amt->event_idx++;
amt->event_idx %= AMT_MAX_EVENTS;
spin_unlock_bh(&amt->lock);
switch (event) {
case AMT_EVENT_RECEIVE:
amt_gw_rcv(amt, skb);
break;
case AMT_EVENT_SEND_DISCOVERY:
amt_event_send_discovery(amt);
break;
case AMT_EVENT_SEND_REQUEST:
amt_event_send_request(amt);
break;
default:
if (skb)
kfree_skb(skb);
break;
}
}
}
static int amt_err_lookup(struct sock *sk, struct sk_buff *skb)
{
struct amt_dev *amt;
@ -2804,7 +2925,7 @@ static int amt_err_lookup(struct sock *sk, struct sk_buff *skb)
break;
case AMT_MSG_REQUEST:
case AMT_MSG_MEMBERSHIP_UPDATE:
if (amt->status >= AMT_STATUS_RECEIVED_ADVERTISEMENT)
if (READ_ONCE(amt->status) >= AMT_STATUS_RECEIVED_ADVERTISEMENT)
mod_delayed_work(amt_wq, &amt->req_wq, 0);
break;
default:
@ -2867,6 +2988,8 @@ static int amt_dev_open(struct net_device *dev)
amt->ready4 = false;
amt->ready6 = false;
amt->event_idx = 0;
amt->nr_events = 0;
err = amt_socket_create(amt);
if (err)
@ -2874,6 +2997,7 @@ static int amt_dev_open(struct net_device *dev)
amt->req_cnt = 0;
amt->remote_ip = 0;
amt->nonce = 0;
get_random_bytes(&amt->key, sizeof(siphash_key_t));
amt->status = AMT_STATUS_INIT;
@ -2892,6 +3016,8 @@ static int amt_dev_stop(struct net_device *dev)
struct amt_dev *amt = netdev_priv(dev);
struct amt_tunnel_list *tunnel, *tmp;
struct socket *sock;
struct sk_buff *skb;
int i;
cancel_delayed_work_sync(&amt->req_wq);
cancel_delayed_work_sync(&amt->discovery_wq);
@ -2904,6 +3030,15 @@ static int amt_dev_stop(struct net_device *dev)
if (sock)
udp_tunnel_sock_release(sock);
cancel_work_sync(&amt->event_wq);
for (i = 0; i < AMT_MAX_EVENTS; i++) {
skb = amt->events[i].skb;
if (skb)
kfree_skb(skb);
amt->events[i].event = AMT_EVENT_NONE;
amt->events[i].skb = NULL;
}
amt->ready4 = false;
amt->ready6 = false;
amt->req_cnt = 0;
@ -3095,7 +3230,7 @@ static int amt_newlink(struct net *net, struct net_device *dev,
goto err;
}
if (amt->mode == AMT_MODE_RELAY) {
amt->qrv = amt->net->ipv4.sysctl_igmp_qrv;
amt->qrv = READ_ONCE(amt->net->ipv4.sysctl_igmp_qrv);
amt->qri = 10;
dev->needed_headroom = amt->stream_dev->needed_headroom +
AMT_RELAY_HLEN;
@ -3146,8 +3281,8 @@ static int amt_newlink(struct net *net, struct net_device *dev,
INIT_DELAYED_WORK(&amt->discovery_wq, amt_discovery_work);
INIT_DELAYED_WORK(&amt->req_wq, amt_req_work);
INIT_DELAYED_WORK(&amt->secret_wq, amt_secret_work);
INIT_WORK(&amt->event_wq, amt_event_work);
INIT_LIST_HEAD(&amt->tunnel_list);
return 0;
err:
dev_put(amt->stream_dev);
@ -3280,7 +3415,7 @@ static int __init amt_init(void)
if (err < 0)
goto unregister_notifier;
amt_wq = alloc_workqueue("amt", WQ_UNBOUND, 1);
amt_wq = alloc_workqueue("amt", WQ_UNBOUND, 0);
if (!amt_wq) {
err = -ENOMEM;
goto rtnl_unregister;
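
The amt.c hunks above all serve a single change: gateway-side message handling is deferred from the UDP encap receive path and the delayed works into one workqueue, fed through a small spinlock-protected event ring (amt->events[], amt->event_idx, amt->nr_events), while the ready4/ready6 flags become lockless READ_ONCE()/WRITE_ONCE() accesses so they can be tested without holding amt->lock. The fragment below is a minimal sketch of that enqueue/drain pattern only; the structure layout, queue depth and every name are illustrative assumptions, not the driver's actual definitions.

/* Minimal sketch of a bounded event ring drained by a work item.
 * Illustrative only; not the amt driver's actual definitions.
 */
#include <linux/skbuff.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define EVQ_MAX 16                              /* assumed queue depth */

enum evq_type { EVQ_NONE, EVQ_RX, EVQ_TIMER };

struct evq_entry {
        enum evq_type type;
        struct sk_buff *skb;                    /* only used for EVQ_RX */
};

struct evq_dev {
        spinlock_t lock;                        /* protects the ring below */
        struct work_struct event_wq;
        struct evq_entry events[EVQ_MAX];
        u8 head;                                /* next entry to drain */
        u8 nr;                                  /* current fill level */
};

/* Called from the receive path (softirq context): enqueue and kick the
 * worker.  Returns true when the ring is full and the caller must drop.
 */
static bool evq_queue(struct evq_dev *d, enum evq_type type,
                      struct sk_buff *skb)
{
        int idx;

        spin_lock_bh(&d->lock);
        if (d->nr >= EVQ_MAX) {
                spin_unlock_bh(&d->lock);
                return true;
        }
        idx = (d->head + d->nr) % EVQ_MAX;
        d->events[idx].type = type;
        d->events[idx].skb = skb;
        d->nr++;
        spin_unlock_bh(&d->lock);

        schedule_work(&d->event_wq);
        return false;
}

/* Work handler: drain at most EVQ_MAX entries, dropping the lock while
 * each event is processed so the receive path is never blocked for long.
 */
static void evq_work(struct work_struct *work)
{
        struct evq_dev *d = container_of(work, struct evq_dev, event_wq);
        struct evq_entry ev;
        int i;

        for (i = 0; i < EVQ_MAX; i++) {
                spin_lock_bh(&d->lock);
                if (!d->nr) {
                        spin_unlock_bh(&d->lock);
                        return;
                }
                ev = d->events[d->head];
                d->events[d->head].type = EVQ_NONE;
                d->events[d->head].skb = NULL;
                d->head = (d->head + 1) % EVQ_MAX;
                d->nr--;
                spin_unlock_bh(&d->lock);

                switch (ev.type) {
                case EVQ_RX:
                        /* parse and handle the packet in process context */
                        consume_skb(ev.skb);
                        break;
                case EVQ_TIMER:
                        /* e.g. (re)send a pending request */
                        break;
                default:
                        if (ev.skb)
                                kfree_skb(ev.skb);
                        break;
                }
        }
}

static void evq_init(struct evq_dev *d)
{
        spin_lock_init(&d->lock);
        INIT_WORK(&d->event_wq, evq_work);
}

Draining in a work item keeps the encap receive callback short and lets the handlers run in process context, and the fixed ring depth provides natural backpressure: when the queue is full the caller simply drops the packet, which is what the rx_dropped accounting around amt_queue_event() and amt_gw_rcv() in the diff reflects.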

View file

@ -1843,6 +1843,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
of_child = of_get_child_by_name(pdev->dev.of_node, name);
if (of_child && of_device_is_available(of_child))
channels_mask |= BIT(i);
of_node_put(of_child);
}
if (chip_id != RENESAS_RZG2L) {

View file

@ -1691,8 +1691,8 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
u32 osc;
int err;
/* The OSC_LPMEN is only supported on MCP2518FD, so use it to
* autodetect the model.
/* The OSC_LPMEN is only supported on MCP2518FD and MCP251863,
* so use it to autodetect the model.
*/
err = regmap_update_bits(priv->map_reg, MCP251XFD_REG_OSC,
MCP251XFD_REG_OSC_LPMEN,
@ -1704,10 +1704,18 @@ static int mcp251xfd_register_chip_detect(struct mcp251xfd_priv *priv)
if (err)
return err;
if (osc & MCP251XFD_REG_OSC_LPMEN)
devtype_data = &mcp251xfd_devtype_data_mcp2518fd;
if (osc & MCP251XFD_REG_OSC_LPMEN) {
/* We cannot distinguish between MCP2518FD and
* MCP251863. If firmware specifies MCP251863, keep
* it, otherwise set to MCP2518FD.
*/
if (mcp251xfd_is_251863(priv))
devtype_data = &mcp251xfd_devtype_data_mcp251863;
else
devtype_data = &mcp251xfd_devtype_data_mcp2518fd;
} else {
devtype_data = &mcp251xfd_devtype_data_mcp2517fd;
}
if (!mcp251xfd_is_251XFD(priv) &&
priv->devtype_data.model != devtype_data->model) {
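
The mcp251xfd hunk refines model autodetection: the OSC_LPMEN bit can only separate the MCP2517FD from the MCP2518FD/MCP251863 pair, so when the bit reads back set the driver now keeps a firmware-declared MCP251863 and otherwise assumes MCP2518FD. Below is a condensed, hypothetical sketch of that "probe a feature bit, fall back to what firmware claims" decision; every FOO_* name and register value is made up for illustration.

/* Sketch: detect a chip variant by probing an optional register bit
 * (all names and register offsets are hypothetical).
 */
#include <linux/bits.h>
#include <linux/regmap.h>
#include <linux/types.h>

#define FOO_REG_FEAT    0x10            /* hypothetical feature register */
#define FOO_FEAT_BIT    BIT(3)          /* implemented only on newer silicon */

struct foo_devtype { const char *name; };

static const struct foo_devtype foo_devtype_old  = { .name = "foo-old" };
static const struct foo_devtype foo_devtype_new  = { .name = "foo-new" };
static const struct foo_devtype foo_devtype_plus = { .name = "foo-new-plus" };

/* Probe a bit that only newer variants implement; when the bit cannot
 * disambiguate two of them, trust what firmware/devicetree declared.
 */
static int foo_detect_variant(struct regmap *map, bool fw_says_plus,
                              const struct foo_devtype **out)
{
        u32 val;
        int err;

        err = regmap_update_bits(map, FOO_REG_FEAT, FOO_FEAT_BIT, FOO_FEAT_BIT);
        if (err)
                return err;

        err = regmap_read(map, FOO_REG_FEAT, &val);
        if (err)
                return err;

        if (val & FOO_FEAT_BIT)
                *out = fw_says_plus ? &foo_devtype_plus : &foo_devtype_new;
        else
                *out = &foo_devtype_old;

        return 0;
}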

View file

@ -1579,18 +1579,21 @@ int ksz_switch_register(struct ksz_device *dev)
ports = of_get_child_by_name(dev->dev->of_node, "ethernet-ports");
if (!ports)
ports = of_get_child_by_name(dev->dev->of_node, "ports");
if (ports)
if (ports) {
for_each_available_child_of_node(ports, port) {
if (of_property_read_u32(port, "reg",
&port_num))
continue;
if (!(dev->port_mask & BIT(port_num))) {
of_node_put(port);
of_node_put(ports);
return -EINVAL;
}
of_get_phy_mode(port,
&dev->ports[port_num].interface);
}
of_node_put(ports);
}
dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
"microchip,synclko-125");
dev->synclko_disable = of_property_read_bool(dev->dev->of_node,
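
The rcar_canfd and ksz hunks above fix the same class of leak: of_get_child_by_name() returns a node with its refcount raised, and for_each_available_child_of_node() holds a reference on the current child, so every exit path, including an early return from inside the loop, must drop what it still holds with of_node_put(). A sketch of the rule with made-up node and property names:

/* Sketch: OF child iteration with balanced refcounts (names are made up). */
#include <linux/bits.h>
#include <linux/of.h>

static int foo_parse_ports(struct device_node *np, unsigned long valid_mask)
{
        struct device_node *ports, *port;
        u32 reg;

        ports = of_get_child_by_name(np, "ethernet-ports");
        if (!ports)
                return 0;                       /* nothing to parse */

        for_each_available_child_of_node(ports, port) {
                if (of_property_read_u32(port, "reg", &reg))
                        continue;       /* iterator drops 'port' on advance */
                if (!(valid_mask & BIT(reg))) {
                        /* Early exit: the iterator still holds a reference
                         * on 'port', and we hold the one on 'ports'.
                         */
                        of_node_put(port);
                        of_node_put(ports);
                        return -EINVAL;
                }
        }
        of_node_put(ports);
        return 0;
}

This mirrors the added calls in both drivers: rcar_canfd drops each of_child after the availability check, and ksz_switch_register drops both port and ports before the -EINVAL return as well as ports after the loop.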

View file

@ -3382,12 +3382,28 @@ static const struct of_device_id sja1105_dt_ids[] = {
};
MODULE_DEVICE_TABLE(of, sja1105_dt_ids);
static const struct spi_device_id sja1105_spi_ids[] = {
{ "sja1105e" },
{ "sja1105t" },
{ "sja1105p" },
{ "sja1105q" },
{ "sja1105r" },
{ "sja1105s" },
{ "sja1110a" },
{ "sja1110b" },
{ "sja1110c" },
{ "sja1110d" },
{ },
};
MODULE_DEVICE_TABLE(spi, sja1105_spi_ids);
static struct spi_driver sja1105_driver = {
.driver = {
.name = "sja1105",
.owner = THIS_MODULE,
.of_match_table = of_match_ptr(sja1105_dt_ids),
},
.id_table = sja1105_spi_ids,
.probe = sja1105_probe,
.remove = sja1105_remove,
.shutdown = sja1105_shutdown,
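
The sja1105 hunk adds a spi_device_id table alongside the existing of_match_table. The commonly cited reason for this family of patches is that the SPI core reports module aliases as spi:<name> even for devicetree-instantiated devices, so without an id_table the module is not auto-loaded (and with of_match_ptr() compiled out on !CONFIG_OF builds the driver has nothing left to match on). A minimal sketch for a hypothetical driver pairing the two tables:

/* Sketch: an SPI driver that matches both by DT compatible and by
 * spi_device_id (all "foo" names are hypothetical).
 */
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/spi/spi.h>

static int foo_spi_probe(struct spi_device *spi)
{
        /* chip identification and subsystem registration would go here */
        return 0;
}

static void foo_spi_remove(struct spi_device *spi)
{
}

static const struct of_device_id foo_spi_dt_ids[] = {
        { .compatible = "vendor,foo1000" },
        { .compatible = "vendor,foo2000" },
        { }
};
MODULE_DEVICE_TABLE(of, foo_spi_dt_ids);

static const struct spi_device_id foo_spi_ids[] = {
        { "foo1000" },
        { "foo2000" },
        { }
};
MODULE_DEVICE_TABLE(spi, foo_spi_ids);

static struct spi_driver foo_spi_driver = {
        .driver = {
                .name           = "foo-spi",
                .of_match_table = foo_spi_dt_ids,
        },
        .id_table = foo_spi_ids,
        .probe    = foo_spi_probe,
        .remove   = foo_spi_remove,
};
module_spi_driver(foo_spi_driver);

MODULE_DESCRIPTION("foo SPI driver sketch");
MODULE_LICENSE("GPL");

The id strings are expected to match the compatible with the vendor prefix stripped, since that is what the SPI core uses as the default modalias for DT-created devices, which is why the sja1105 table simply lists the bare chip names.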

Some files were not shown because too many files have changed in this diff.