ARM: SoC-related driver updates

Various driver updates for platforms:
 
  - Nvidia: Fuse support for Tegra194, continued memory controller pieces
    for Tegra30
 
  - NXP/FSL: Refactorings of QuickEngine drivers to support ARM/ARM64/PPC
 
  - NXP/FSL: i.MX8MP SoC driver pieces
 
  - TI Keystone: ring accelerator driver
 
  - Qualcomm: SCM driver cleanup/refactoring + support for new SoCs.
 
  - Xilinx ZynqMP: feature checking interface for firmware. Mailbox
    communication for power management
 
  - Overall support patch set for cpuidle on more complex hierarchies
    (PSCI-based)
 
 + Misc cleanups, refactorings of Marvell, TI, other platforms.
 -----BEGIN PGP SIGNATURE-----
 
 iQJDBAABCAAtFiEElf+HevZ4QCAJmMQ+jBrnPN6EHHcFAl4+lTYPHG9sb2ZAbGl4
 b20ubmV0AAoJEIwa5zzehBx3nQcQAJm91+6hZbmMjlBySGS7ISjYvOcrI/hMgiOl
 uhhEP0Dcylvf9A9x3wcIbLwixe+2pvie9DQh2u5F80ShYimidtFi/2xCfuTb9fKu
 sxxKjrXWyVKhkpW0z+tedY08ftVhkwwcyD4m2C7uVl6AwTP7c367vFeU7XjF2APn
 drfgmgbjm8U3XbSyAqv+k6z6tyqaCnFM7vbPupSKHgHJ3mfByxOa+XyBN2RdgBbs
 0KrVfbXGv80zFIFrMPwaWG7G52bu7K68nVdgy44MpKdRZ6QTjhnR+kerFxHsYgV4
 bM55Fya52nTCSTGdKaQakDtKwbAUdCDTSkxgOHGcQoyFi0R/VaEUJtcysnvLbI6c
 +n/yFIzGyEdXcvIzfv2SoDYhogw19I6RR/M9K5Ni29eazkDVYx2z3rI+2QYeqCiF
 u7cq52gW6JLP0SI/9kuUrRFiR8v19Ixap7qokAxgqQwYB3NzT8a7WsYPkzdpDZGQ
 ETSDFMyBWT6UvBe/HWkQluBabbet53rG8BF0OHFrQuMK0u/ieKgSGuTB9XN2djEW
 PHMOMz2vhi+8XTfpkskhF2tTxlA/k4R6QwCdIMpIkMRVnVQCh1XdPr3Fi2NrgB+S
 kIXHD4vV6zLYh04zHyKewSPHAXWgraFpg2qKnvL5+KWMTnW6QH+RNjOt9xKDNXOd
 +iDXpOad
 =ONtb
 -----END PGP SIGNATURE-----

Merge tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC-related driver updates from Olof Johansson:
 "Various driver updates for platforms:

   - Nvidia: Fuse support for Tegra194, continued memory controller
     pieces for Tegra30

   - NXP/FSL: Refactorings of QuickEngine drivers to support
     ARM/ARM64/PPC

   - NXP/FSL: i.MX8MP SoC driver pieces

   - TI Keystone: ring accelerator driver

   - Qualcomm: SCM driver cleanup/refactoring + support for new SoCs.

   - Xilinx ZynqMP: feature checking interface for firmware. Mailbox
     communication for power management

   - Overall support patch set for cpuidle on more complex hierarchies
     (PSCI-based)

  and misc cleanups, refactorings of Marvell, TI, other platforms"

* tag 'armsoc-drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (166 commits)
  drivers: soc: xilinx: Use mailbox IPI callback
  dt-bindings: power: reset: xilinx: Add bindings for ipi mailbox
  drivers: soc: ti: knav_qmss_queue: Pass lockdep expression to RCU lists
  MAINTAINERS: Add brcmstb PCIe controller entry
  soc/tegra: fuse: Unmap registers once they are not needed anymore
  soc/tegra: fuse: Correct straps' address for older Tegra124 device trees
  soc/tegra: fuse: Warn if straps are not ready
  soc/tegra: fuse: Cache values of straps and Chip ID registers
  memory: tegra30-emc: Correct error message for timed out auto calibration
  memory: tegra30-emc: Firm up hardware programming sequence
  memory: tegra30-emc: Firm up suspend/resume sequence
  soc/tegra: regulators: Do nothing if voltage is unchanged
  memory: tegra: Correct reset value of xusb_hostr
  soc/tegra: fuse: Add APB DMA dependency for Tegra20
  bus: tegra-aconnect: Remove PM_CLK dependency
  dt-bindings: mediatek: add MT6765 power dt-bindings
  soc: mediatek: cmdq: delete not used define
  memory: tegra: Add support for the Tegra194 memory controller
  memory: tegra: Only include support for enabled SoCs
  memory: tegra: Support DVFS on Tegra186 and later
  ...
Linus Torvalds 2020-02-08 14:04:19 -08:00
Parents: 1afa9c3b7c 88b4750151
Commit: eab3540562
142 changed files: 6592 additions and 3289 deletions


@ -242,6 +242,21 @@ properties:
where voltage is in V, frequency is in MHz.
power-domains:
$ref: '/schemas/types.yaml#/definitions/phandle-array'
description:
List of phandles and PM domain specifiers, as defined by bindings of the
PM domain provider (see also ../power_domain.txt).
power-domain-names:
$ref: '/schemas/types.yaml#/definitions/string-array'
description:
A list of power domain name strings sorted in the same order as the
power-domains property.
For PSCI-based platforms, the name corresponding to the index of the PSCI
PM domain provider must be "psci".
qcom,saw:
$ref: '/schemas/types.yaml#/definitions/phandle'
description: |

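To make the power-domains/power-domain-names pair above concrete, here is a
minimal CPU-node sketch, condensed from the psci.yaml example later in this
merge (the CPU_PD0 label is illustrative):

    cpu@0 {
        device_type = "cpu";
        compatible = "arm,cortex-a53";
        reg = <0x0>;
        enable-method = "psci";
        power-domains = <&CPU_PD0>;
        power-domain-names = "psci";
    };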

@ -47,7 +47,7 @@ examples:
- |
#include <dt-bindings/interrupt-controller/arm-gic.h>
cache-controller@1100000 {
system-cache-controller@1100000 {
compatible = "qcom,sdm845-llcc";
reg = <0x1100000 0x200000>, <0x1300000 0x50000> ;
reg-names = "llcc_base", "llcc_broadcast_base";


@ -102,6 +102,34 @@ properties:
[1] Kernel documentation - ARM idle states bindings
Documentation/devicetree/bindings/arm/idle-states.txt
"#power-domain-cells":
description:
The number of cells in a PM domain specifier as per binding in [3].
Must be 0, representing a single PM domain.
ARM systems can have multiple cores, sometimes in a hierarchical
arrangement. This often, but not always, maps directly to the processor
power topology of the system. Individual nodes in a topology have their
own specific power states and can be better represented hierarchically.
For these cases, the definitions of the idle states for the CPUs and the
CPU topology must conform to the binding in [3]. The idle states
themselves must conform to the binding in [4] and must specify the
arm,psci-suspend-param property.
Note also that PSCI firmware v1.0 introduced the OS-Initiated (OSI) CPU
suspend mode. Using a hierarchical representation helps to implement
support for OSI mode, and OS implementations may choose to mandate it.
[3] Documentation/devicetree/bindings/power/power_domain.txt
[4] Documentation/devicetree/bindings/power/domain-idle-state.txt
power-domains:
$ref: '/schemas/types.yaml#/definitions/phandle-array'
description:
List of phandles and PM domain specifiers, as defined by bindings of the
PM domain provider.
required:
- compatible
@ -160,4 +188,80 @@ examples:
cpu_on = <0x95c10002>;
cpu_off = <0x95c10001>;
};
- |+
// Case 4: CPUs and CPU idle states described using the hierarchical model.
cpus {
#size-cells = <0>;
#address-cells = <1>;
CPU0: cpu@0 {
device_type = "cpu";
compatible = "arm,cortex-a53", "arm,armv8";
reg = <0x0>;
enable-method = "psci";
power-domains = <&CPU_PD0>;
power-domain-names = "psci";
};
CPU1: cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0x100>;
enable-method = "psci";
power-domains = <&CPU_PD1>;
power-domain-names = "psci";
};
idle-states {
CPU_PWRDN: cpu-power-down {
compatible = "arm,idle-state";
arm,psci-suspend-param = <0x0000001>;
entry-latency-us = <10>;
exit-latency-us = <10>;
min-residency-us = <100>;
};
CLUSTER_RET: cluster-retention {
compatible = "domain-idle-state";
arm,psci-suspend-param = <0x1000011>;
entry-latency-us = <500>;
exit-latency-us = <500>;
min-residency-us = <2000>;
};
CLUSTER_PWRDN: cluster-power-down {
compatible = "domain-idle-state";
arm,psci-suspend-param = <0x1000031>;
entry-latency-us = <2000>;
exit-latency-us = <2000>;
min-residency-us = <6000>;
};
};
};
psci {
compatible = "arm,psci-1.0";
method = "smc";
CPU_PD0: cpu-pd0 {
#power-domain-cells = <0>;
domain-idle-states = <&CPU_PWRDN>;
power-domains = <&CLUSTER_PD>;
};
CPU_PD1: cpu-pd1 {
#power-domain-cells = <0>;
domain-idle-states = <&CPU_PWRDN>;
power-domains = <&CLUSTER_PD>;
};
CLUSTER_PD: cluster-pd {
#power-domain-cells = <0>;
domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWRDN>;
};
};
...


@ -1,148 +0,0 @@
Qualcomm RPM/RPMh Power domains
For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh
which then translates it into a corresponding voltage on a rail
Required Properties:
- compatible: Should be one of the following
* qcom,msm8976-rpmpd: RPM Power domain for the msm8976 family of SoC
* qcom,msm8996-rpmpd: RPM Power domain for the msm8996 family of SoC
* qcom,msm8998-rpmpd: RPM Power domain for the msm8998 family of SoC
* qcom,qcs404-rpmpd: RPM Power domain for the qcs404 family of SoC
* qcom,sdm845-rpmhpd: RPMh Power domain for the sdm845 family of SoC
- #power-domain-cells: number of cells in Power domain specifier
must be 1.
- operating-points-v2: Phandle to the OPP table for the Power domain.
Refer to Documentation/devicetree/bindings/power/power_domain.txt
and Documentation/devicetree/bindings/opp/opp.txt for more details
Refer to <dt-bindings/power/qcom-rpmpd.h> for the level values for
various OPPs for different platforms as well as Power domain indexes
Example: rpmh power domain controller and OPP table
#include <dt-bindings/power/qcom-rpmhpd.h>
opp-level values specified in the OPP tables for RPMh power domains
should use the RPMH_REGULATOR_LEVEL_* constants from
<dt-bindings/power/qcom-rpmhpd.h>
rpmhpd: power-controller {
compatible = "qcom,sdm845-rpmhpd";
#power-domain-cells = <1>;
operating-points-v2 = <&rpmhpd_opp_table>;
rpmhpd_opp_table: opp-table {
compatible = "operating-points-v2";
rpmhpd_opp_ret: opp1 {
opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>;
};
rpmhpd_opp_min_svs: opp2 {
opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>;
};
rpmhpd_opp_low_svs: opp3 {
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
};
rpmhpd_opp_svs: opp4 {
opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
};
rpmhpd_opp_svs_l1: opp5 {
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
};
rpmhpd_opp_nom: opp6 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
};
rpmhpd_opp_nom_l1: opp7 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
};
rpmhpd_opp_nom_l2: opp8 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>;
};
rpmhpd_opp_turbo: opp9 {
opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
};
rpmhpd_opp_turbo_l1: opp10 {
opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
};
};
};
Example: rpm power domain controller and OPP table
rpmpd: power-controller {
compatible = "qcom,msm8996-rpmpd";
#power-domain-cells = <1>;
operating-points-v2 = <&rpmpd_opp_table>;
rpmpd_opp_table: opp-table {
compatible = "operating-points-v2";
rpmpd_opp_low: opp1 {
opp-level = <1>;
};
rpmpd_opp_ret: opp2 {
opp-level = <2>;
};
rpmpd_opp_svs: opp3 {
opp-level = <3>;
};
rpmpd_opp_normal: opp4 {
opp-level = <4>;
};
rpmpd_opp_high: opp5 {
opp-level = <5>;
};
rpmpd_opp_turbo: opp6 {
opp-level = <6>;
};
};
};
Example: Client/Consumer device using OPP table
leaky-device0@12350000 {
compatible = "foo,i-leak-current";
reg = <0x12350000 0x1000>;
power-domains = <&rpmhpd SDM845_MX>;
operating-points-v2 = <&leaky_opp_table>;
};
leaky_opp_table: opp-table {
compatible = "operating-points-v2";
opp1 {
opp-hz = /bits/ 64 <144000>;
required-opps = <&rpmhpd_opp_low>;
};
opp2 {
opp-hz = /bits/ 64 <400000>;
required-opps = <&rpmhpd_opp_ret>;
};
opp3 {
opp-hz = /bits/ 64 <20000000>;
required-opps = <&rpmpd_opp_svs>;
};
opp4 {
opp-hz = /bits/ 64 <25000000>;
required-opps = <&rpmpd_opp_normal>;
};
};


@ -0,0 +1,170 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/power/qcom,rpmpd.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Qualcomm RPM/RPMh Power domains
maintainers:
- Rajendra Nayak <rnayak@codeaurora.org>
description:
For RPM/RPMh Power domains, we communicate a performance state to RPM/RPMh
which then translates it into a corresponding voltage on a rail.
properties:
compatible:
enum:
- qcom,msm8976-rpmpd
- qcom,msm8996-rpmpd
- qcom,msm8998-rpmpd
- qcom,qcs404-rpmpd
- qcom,sc7180-rpmhpd
- qcom,sdm845-rpmhpd
- qcom,sm8150-rpmhpd
'#power-domain-cells':
const: 1
operating-points-v2: true
opp-table:
type: object
required:
- compatible
- '#power-domain-cells'
- operating-points-v2
additionalProperties: false
examples:
- |
// Example 1 (rpmh power domain controller and OPP table):
#include <dt-bindings/power/qcom-rpmpd.h>
rpmhpd: power-controller {
compatible = "qcom,sdm845-rpmhpd";
#power-domain-cells = <1>;
operating-points-v2 = <&rpmhpd_opp_table>;
rpmhpd_opp_table: opp-table {
compatible = "operating-points-v2";
rpmhpd_opp_ret: opp1 {
opp-level = <RPMH_REGULATOR_LEVEL_RETENTION>;
};
rpmhpd_opp_min_svs: opp2 {
opp-level = <RPMH_REGULATOR_LEVEL_MIN_SVS>;
};
rpmhpd_opp_low_svs: opp3 {
opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
};
rpmhpd_opp_svs: opp4 {
opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
};
rpmhpd_opp_svs_l1: opp5 {
opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
};
rpmhpd_opp_nom: opp6 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
};
rpmhpd_opp_nom_l1: opp7 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
};
rpmhpd_opp_nom_l2: opp8 {
opp-level = <RPMH_REGULATOR_LEVEL_NOM_L2>;
};
rpmhpd_opp_turbo: opp9 {
opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
};
rpmhpd_opp_turbo_l1: opp10 {
opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
};
};
};
- |
// Example 2 (rpm power domain controller and OPP table):
rpmpd: power-controller {
compatible = "qcom,msm8996-rpmpd";
#power-domain-cells = <1>;
operating-points-v2 = <&rpmpd_opp_table>;
rpmpd_opp_table: opp-table {
compatible = "operating-points-v2";
rpmpd_opp_low: opp1 {
opp-level = <1>;
};
rpmpd_opp_ret: opp2 {
opp-level = <2>;
};
rpmpd_opp_svs: opp3 {
opp-level = <3>;
};
rpmpd_opp_normal: opp4 {
opp-level = <4>;
};
rpmpd_opp_high: opp5 {
opp-level = <5>;
};
rpmpd_opp_turbo: opp6 {
opp-level = <6>;
};
};
};
- |
// Example 3 (Client/Consumer device using OPP table):
leaky-device0@12350000 {
compatible = "foo,i-leak-current";
reg = <0x12350000 0x1000>;
power-domains = <&rpmhpd 0>;
operating-points-v2 = <&leaky_opp_table>;
};
leaky_opp_table: opp-table {
compatible = "operating-points-v2";
opp1 {
opp-hz = /bits/ 64 <144000>;
required-opps = <&rpmhpd_opp_low>;
};
opp2 {
opp-hz = /bits/ 64 <400000>;
required-opps = <&rpmhpd_opp_ret>;
};
opp3 {
opp-hz = /bits/ 64 <20000000>;
required-opps = <&rpmpd_opp_svs>;
};
opp4 {
opp-hz = /bits/ 64 <25000000>;
required-opps = <&rpmpd_opp_normal>;
};
};
...


@ -8,9 +8,27 @@ Required properties:
- compatible: Must contain: "xlnx,zynqmp-power"
- interrupts: Interrupt specifier
-------
Example
-------
Optional properties:
- mbox-names : Name given to channels seen in the 'mboxes' property.
"tx" - Mailbox corresponding to transmit path
"rx" - Mailbox corresponding to receive path
- mboxes : Standard property to specify a Mailbox. Each value of
the mboxes property should contain a phandle to the
mailbox controller device node and an args specifier
that will be the phandle to the intended sub-mailbox
child node to be used for communication. See
Documentation/devicetree/bindings/mailbox/mailbox.txt
for more details about the generic mailbox controller
and client driver bindings. Also see
Documentation/devicetree/bindings/mailbox/ \
xlnx,zynqmp-ipi-mailbox.txt for a typical controller that
is used to communicate with this system controller.
--------
Examples
--------
Example with interrupt method:
firmware {
zynqmp_firmware: zynqmp-firmware {
@ -23,3 +41,21 @@ firmware {
};
};
};
Example with IPI mailbox method:
firmware {
zynqmp_firmware: zynqmp-firmware {
compatible = "xlnx,zynqmp-firmware";
method = "smc";
zynqmp_power: zynqmp-power {
compatible = "xlnx,zynqmp-power";
interrupt-parent = <&gic>;
interrupts = <0 35 4>;
mboxes = <&ipi_mailbox_pmu0 0>,
<&ipi_mailbox_pmu0 1>;
mbox-names = "tx", "rx";
};
};
};


@ -0,0 +1,37 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
# Copyright 2020 Broadcom
%YAML 1.2
---
$id: "http://devicetree.org/schemas/reset/brcm,bcm7216-pcie-sata-rescal.yaml#"
$schema: "http://devicetree.org/meta-schemas/core.yaml#"
title: BCM7216 RESCAL reset controller
description: This document describes the BCM7216 RESCAL reset controller which is responsible for controlling the reset of the SATA and PCIe0/1 instances on BCM7216.
maintainers:
- Florian Fainelli <f.fainelli@gmail.com>
- Jim Quinlan <jim2101024@gmail.com>
properties:
compatible:
const: brcm,bcm7216-pcie-sata-rescal
reg:
maxItems: 1
"#reset-cells":
const: 0
required:
- compatible
- reg
- "#reset-cells"
examples:
- |
reset-controller@8b2c800 {
compatible = "brcm,bcm7216-pcie-sata-rescal";
reg = <0x8b2c800 0x10>;
#reset-cells = <0>;
};
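Since "#reset-cells" is 0, a consumer references this controller with an
empty reset specifier. A hypothetical consumer sketch, assuming the node
above is labeled rescal (the consumer node, reg values, and reset-names
value are illustrative, not taken from this commit):

    pcie@8b20000 {
        compatible = "brcm,bcm7216-pcie";
        reg = <0x8b20000 0x9310>;
        resets = <&rescal>;
        reset-names = "rescal";
    };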


@ -0,0 +1,63 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/reset/intel,rcu-gw.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: System Reset Controller on Intel Gateway SoCs
maintainers:
- Dilip Kota <eswara.kota@linux.intel.com>
properties:
compatible:
enum:
- intel,rcu-lgm
- intel,rcu-xrx200
reg:
description: Reset controller registers.
maxItems: 1
intel,global-reset:
description: Global reset register offset and bit offset.
allOf:
- $ref: /schemas/types.yaml#/definitions/uint32-array
- maxItems: 2
"#reset-cells":
minimum: 2
maximum: 3
description: |
First cell is reset request register offset.
Second cell is bit offset in reset request register.
Third cell is bit offset in reset status register.
For the LGM SoC the reset cell count is 2, since the bit
offset in the reset request and reset status registers is
the same; legacy SoCs use 3 cells because the bit offsets differ.
required:
- compatible
- reg
- intel,global-reset
- "#reset-cells"
additionalProperties: false
examples:
- |
rcu0: reset-controller@e0000000 {
compatible = "intel,rcu-lgm";
reg = <0xe0000000 0x20000>;
intel,global-reset = <0x10 30>;
#reset-cells = <2>;
};
pwm: pwm@e0d00000 {
status = "disabled";
compatible = "intel,lgm-pwm";
reg = <0xe0d00000 0x30>;
clocks = <&cgu0 1>;
#pwm-cells = <2>;
resets = <&rcu0 0x30 21>;
};
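For the legacy (intel,rcu-xrx200) case, "#reset-cells" is 3 and the consumer
adds the status-register bit offset as a third cell. A hypothetical sketch
with invented addresses and offsets, purely for illustration:

    rcu_xway: reset-controller@803000 {
        compatible = "intel,rcu-xrx200";
        reg = <0x803000 0x100>;
        intel,global-reset = <0x10 30>;
        #reset-cells = <3>;
    };

    phy@820000 {
        compatible = "intel,xway-phy"; /* illustrative consumer */
        reg = <0x820000 0x100>;
        /* request-register offset, request bit, status bit */
        resets = <&rcu_xway 0x48 5 6>;
    };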


@ -0,0 +1,32 @@
Nuvoton NPCM Reset controller
Required properties:
- compatible : "nuvoton,npcm750-reset" for NPCM7XX BMC
- reg : specifies physical base address and size of the register.
- #reset-cells: must be set to 2
Optional property:
- nuvoton,sw-reset-number - Contains the software reset number to restart the SoC.
NPCM7xx contains four software resets, numbered 1 to 4.
If 'nuvoton,sw-reset-number' is not specified, software reset is disabled.
Example:
rstc: rstc@f0801000 {
compatible = "nuvoton,npcm750-reset";
reg = <0xf0801000 0x70>;
#reset-cells = <2>;
nuvoton,sw-reset-number = <2>;
};
Specifying reset lines connected to IP NPCM7XX modules
======================================================
example:
spi0: spi@..... {
...
resets = <&rstc NPCM7XX_RESET_IPSRST2 NPCM7XX_RESET_PSPI1>;
...
};
The indexes can be found in <dt-bindings/reset/nuvoton,npcm7xx-reset.h>.


@ -11,6 +11,7 @@ The driver implements the Generic PM domain bindings described in
power/power-domain.yaml. It provides the power domains defined in
- include/dt-bindings/power/mt8173-power.h
- include/dt-bindings/power/mt6797-power.h
- include/dt-bindings/power/mt6765-power.h
- include/dt-bindings/power/mt2701-power.h
- include/dt-bindings/power/mt2712-power.h
- include/dt-bindings/power/mt7622-power.h
@ -19,6 +20,7 @@ Required properties:
- compatible: Should be one of:
- "mediatek,mt2701-scpsys"
- "mediatek,mt2712-scpsys"
- "mediatek,mt6765-scpsys"
- "mediatek,mt6797-scpsys"
- "mediatek,mt7622-scpsys"
- "mediatek,mt7623-scpsys", "mediatek,mt2701-scpsys": For MT7623 SoC
@ -33,6 +35,10 @@ Required properties:
enabled before enabling certain power domains.
Required clocks for MT2701 or MT7623: "mm", "mfg", "ethif"
Required clocks for MT2712: "mm", "mfg", "venc", "jpgdec", "audio", "vdec"
Required clocks for MT6765: MUX: "mm", "mfg"
CG: "mm-0", "mm-1", "mm-2", "mm-3", "isp-0",
"isp-1", "cam-0", "cam-1", "cam-2",
"cam-3","cam-4"
Required clocks for MT6797: "mm", "mfg", "vdec"
Required clocks for MT7622 or MT7629: "hif_sel"
Required clocks for MT7623A: "ethif"


@ -3289,6 +3289,8 @@ S: Maintained
N: bcm2711
N: bcm2835
F: drivers/staging/vc04_services
F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
F: drivers/pci/controller/pcie-brcmstb.c
BROADCOM BCM47XX MIPS ARCHITECTURE
M: Hauke Mehrtens <hauke@hauke-m.de>
@ -3344,6 +3346,8 @@ F: drivers/bus/brcmstb_gisb.c
F: arch/arm/mm/cache-b15-rac.c
F: arch/arm/include/asm/hardware/cache-b15-rac.h
N: brcmstb
F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
F: drivers/pci/controller/pcie-brcmstb.c
BROADCOM BMIPS CPUFREQ DRIVER
M: Markus Mayer <mmayer@broadcom.com>
@ -16148,6 +16152,7 @@ F: drivers/firmware/arm_scpi.c
F: drivers/firmware/arm_scmi/
F: drivers/reset/reset-scmi.c
F: include/linux/sc[mp]i_protocol.h
F: include/trace/events/scmi.h
SYSTEM RESET/SHUTDOWN DRIVERS
M: Sebastian Reichel <sre@kernel.org>


@ -102,10 +102,11 @@
reg = <0x0>;
next-level-cache = <&L2_0>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
clocks = <&apcs>;
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
power-domains = <&CPU_PD0>;
power-domain-names = "psci";
};
CPU1: cpu@1 {
@ -114,10 +115,11 @@
reg = <0x1>;
next-level-cache = <&L2_0>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
clocks = <&apcs>;
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
power-domains = <&CPU_PD1>;
power-domain-names = "psci";
};
CPU2: cpu@2 {
@ -126,10 +128,11 @@
reg = <0x2>;
next-level-cache = <&L2_0>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
clocks = <&apcs>;
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
power-domains = <&CPU_PD2>;
power-domain-names = "psci";
};
CPU3: cpu@3 {
@ -138,10 +141,11 @@
reg = <0x3>;
next-level-cache = <&L2_0>;
enable-method = "psci";
cpu-idle-states = <&CPU_SLEEP_0>;
clocks = <&apcs>;
operating-points-v2 = <&cpu_opp_table>;
#cooling-cells = <2>;
power-domains = <&CPU_PD3>;
power-domain-names = "psci";
};
L2_0: l2-cache {
@ -161,12 +165,57 @@
min-residency-us = <2000>;
local-timer-stop;
};
CLUSTER_RET: cluster-retention {
compatible = "domain-idle-state";
arm,psci-suspend-param = <0x41000012>;
entry-latency-us = <500>;
exit-latency-us = <500>;
min-residency-us = <2000>;
};
CLUSTER_PWRDN: cluster-gdhs {
compatible = "domain-idle-state";
arm,psci-suspend-param = <0x41000032>;
entry-latency-us = <2000>;
exit-latency-us = <2000>;
min-residency-us = <6000>;
};
};
};
psci {
compatible = "arm,psci-1.0";
method = "smc";
CPU_PD0: cpu-pd0 {
#power-domain-cells = <0>;
power-domains = <&CLUSTER_PD>;
domain-idle-states = <&CPU_SLEEP_0>;
};
CPU_PD1: cpu-pd1 {
#power-domain-cells = <0>;
power-domains = <&CLUSTER_PD>;
domain-idle-states = <&CPU_SLEEP_0>;
};
CPU_PD2: cpu-pd2 {
#power-domain-cells = <0>;
power-domains = <&CLUSTER_PD>;
domain-idle-states = <&CPU_SLEEP_0>;
};
CPU_PD3: cpu-pd3 {
#power-domain-cells = <0>;
power-domains = <&CLUSTER_PD>;
domain-idle-states = <&CPU_SLEEP_0>;
};
CLUSTER_PD: cluster-pd {
#power-domain-cells = <0>;
domain-idle-states = <&CLUSTER_RET>, <&CLUSTER_PWRDN>;
};
};
pmu {


@ -1,171 +1 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __CPM_H
#define __CPM_H
#include <linux/compiler.h>
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/of.h>
#include <soc/fsl/qe/qe.h>
/*
* SPI Parameter RAM common to QE and CPM.
*/
struct spi_pram {
__be16 rbase; /* Rx Buffer descriptor base address */
__be16 tbase; /* Tx Buffer descriptor base address */
u8 rfcr; /* Rx function code */
u8 tfcr; /* Tx function code */
__be16 mrblr; /* Max receive buffer length */
__be32 rstate; /* Internal */
__be32 rdp; /* Internal */
__be16 rbptr; /* Internal */
__be16 rbc; /* Internal */
__be32 rxtmp; /* Internal */
__be32 tstate; /* Internal */
__be32 tdp; /* Internal */
__be16 tbptr; /* Internal */
__be16 tbc; /* Internal */
__be32 txtmp; /* Internal */
__be32 res; /* Tx temp. */
__be16 rpbase; /* Relocation pointer (CPM1 only) */
__be16 res1; /* Reserved */
};
/*
* USB Controller pram common to QE and CPM.
*/
struct usb_ctlr {
u8 usb_usmod;
u8 usb_usadr;
u8 usb_uscom;
u8 res1[1];
__be16 usb_usep[4];
u8 res2[4];
__be16 usb_usber;
u8 res3[2];
__be16 usb_usbmr;
u8 res4[1];
u8 usb_usbs;
/* Fields down below are QE-only */
__be16 usb_ussft;
u8 res5[2];
__be16 usb_usfrn;
u8 res6[0x22];
} __attribute__ ((packed));
/*
* Function code bits, usually generic to devices.
*/
#ifdef CONFIG_CPM1
#define CPMFCR_GBL ((u_char)0x00) /* Flag doesn't exist in CPM1 */
#define CPMFCR_TC2 ((u_char)0x00) /* Flag doesn't exist in CPM1 */
#define CPMFCR_DTB ((u_char)0x00) /* Flag doesn't exist in CPM1 */
#define CPMFCR_BDB ((u_char)0x00) /* Flag doesn't exist in CPM1 */
#else
#define CPMFCR_GBL ((u_char)0x20) /* Set memory snooping */
#define CPMFCR_TC2 ((u_char)0x04) /* Transfer code 2 value */
#define CPMFCR_DTB ((u_char)0x02) /* Use local bus for data when set */
#define CPMFCR_BDB ((u_char)0x01) /* Use local bus for BD when set */
#endif
#define CPMFCR_EB ((u_char)0x10) /* Set big endian byte order */
/* Opcodes common to CPM1 and CPM2
*/
#define CPM_CR_INIT_TRX ((ushort)0x0000)
#define CPM_CR_INIT_RX ((ushort)0x0001)
#define CPM_CR_INIT_TX ((ushort)0x0002)
#define CPM_CR_HUNT_MODE ((ushort)0x0003)
#define CPM_CR_STOP_TX ((ushort)0x0004)
#define CPM_CR_GRA_STOP_TX ((ushort)0x0005)
#define CPM_CR_RESTART_TX ((ushort)0x0006)
#define CPM_CR_CLOSE_RX_BD ((ushort)0x0007)
#define CPM_CR_SET_GADDR ((ushort)0x0008)
#define CPM_CR_SET_TIMER ((ushort)0x0008)
#define CPM_CR_STOP_IDMA ((ushort)0x000b)
/* Buffer descriptors used by many of the CPM protocols. */
typedef struct cpm_buf_desc {
ushort cbd_sc; /* Status and Control */
ushort cbd_datlen; /* Data length in buffer */
uint cbd_bufaddr; /* Buffer address in host memory */
} cbd_t;
/* Buffer descriptor control/status used by serial
*/
#define BD_SC_EMPTY (0x8000) /* Receive is empty */
#define BD_SC_READY (0x8000) /* Transmit is ready */
#define BD_SC_WRAP (0x2000) /* Last buffer descriptor */
#define BD_SC_INTRPT (0x1000) /* Interrupt on change */
#define BD_SC_LAST (0x0800) /* Last buffer in frame */
#define BD_SC_TC (0x0400) /* Transmit CRC */
#define BD_SC_CM (0x0200) /* Continuous mode */
#define BD_SC_ID (0x0100) /* Rec'd too many idles */
#define BD_SC_P (0x0100) /* xmt preamble */
#define BD_SC_BR (0x0020) /* Break received */
#define BD_SC_FR (0x0010) /* Framing error */
#define BD_SC_PR (0x0008) /* Parity error */
#define BD_SC_NAK (0x0004) /* NAK - did not respond */
#define BD_SC_OV (0x0002) /* Overrun */
#define BD_SC_UN (0x0002) /* Underrun */
#define BD_SC_CD (0x0001) /* */
#define BD_SC_CL (0x0001) /* Collision */
/* Buffer descriptor control/status used by Ethernet receive.
* Common to SCC and FCC.
*/
#define BD_ENET_RX_EMPTY (0x8000)
#define BD_ENET_RX_WRAP (0x2000)
#define BD_ENET_RX_INTR (0x1000)
#define BD_ENET_RX_LAST (0x0800)
#define BD_ENET_RX_FIRST (0x0400)
#define BD_ENET_RX_MISS (0x0100)
#define BD_ENET_RX_BC (0x0080) /* FCC Only */
#define BD_ENET_RX_MC (0x0040) /* FCC Only */
#define BD_ENET_RX_LG (0x0020)
#define BD_ENET_RX_NO (0x0010)
#define BD_ENET_RX_SH (0x0008)
#define BD_ENET_RX_CR (0x0004)
#define BD_ENET_RX_OV (0x0002)
#define BD_ENET_RX_CL (0x0001)
#define BD_ENET_RX_STATS (0x01ff) /* All status bits */
/* Buffer descriptor control/status used by Ethernet transmit.
* Common to SCC and FCC.
*/
#define BD_ENET_TX_READY (0x8000)
#define BD_ENET_TX_PAD (0x4000)
#define BD_ENET_TX_WRAP (0x2000)
#define BD_ENET_TX_INTR (0x1000)
#define BD_ENET_TX_LAST (0x0800)
#define BD_ENET_TX_TC (0x0400)
#define BD_ENET_TX_DEF (0x0200)
#define BD_ENET_TX_HB (0x0100)
#define BD_ENET_TX_LC (0x0080)
#define BD_ENET_TX_RL (0x0040)
#define BD_ENET_TX_RCMASK (0x003c)
#define BD_ENET_TX_UN (0x0002)
#define BD_ENET_TX_CSL (0x0001)
#define BD_ENET_TX_STATS (0x03ff) /* All status bits */
/* Buffer descriptor control/status used by Transparent mode SCC.
*/
#define BD_SCC_TX_LAST (0x0800)
/* Buffer descriptor control/status used by I2C.
*/
#define BD_I2C_START (0x0400)
#ifdef CONFIG_CPM
int cpm_command(u32 command, u8 opcode);
#else
static inline int cpm_command(u32 command, u8 opcode)
{
return -ENOSYS;
}
#endif /* CONFIG_CPM */
int cpm2_gpiochip_add32(struct device *dev);
#endif
#include <soc/fsl/cpm.h>


@ -34,7 +34,6 @@
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include "mpc83xx.h"
@ -178,7 +177,7 @@ define_machine(mpc83xx_km) {
.name = "mpc83xx-km-platform",
.probe = mpc83xx_km_probe,
.setup_arch = mpc83xx_km_setup_arch,
.init_IRQ = mpc83xx_ipic_and_qe_init_IRQ,
.init_IRQ = mpc83xx_ipic_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,


@ -14,7 +14,6 @@
#include <asm/io.h>
#include <asm/hw_irq.h>
#include <asm/ipic.h>
#include <soc/fsl/qe/qe_ic.h>
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
@ -91,28 +90,6 @@ void __init mpc83xx_ipic_init_IRQ(void)
ipic_set_default_priority();
}
#ifdef CONFIG_QUICC_ENGINE
void __init mpc83xx_qe_init_IRQ(void)
{
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (!np) {
np = of_find_node_by_type(NULL, "qeic");
if (!np)
return;
}
qe_ic_init(np, 0, qe_ic_cascade_low_ipic, qe_ic_cascade_high_ipic);
of_node_put(np);
}
void __init mpc83xx_ipic_and_qe_init_IRQ(void)
{
mpc83xx_ipic_init_IRQ();
mpc83xx_qe_init_IRQ();
}
#endif /* CONFIG_QUICC_ENGINE */
static const struct of_device_id of_bus_ids[] __initconst = {
{ .type = "soc", },
{ .compatible = "soc", },


@ -33,7 +33,6 @@
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include "mpc83xx.h"
@ -102,7 +101,7 @@ define_machine(mpc832x_mds) {
.name = "MPC832x MDS",
.probe = mpc832x_sys_probe,
.setup_arch = mpc832x_sys_setup_arch,
.init_IRQ = mpc83xx_ipic_and_qe_init_IRQ,
.init_IRQ = mpc83xx_ipic_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,


@ -22,7 +22,6 @@
#include <asm/ipic.h>
#include <asm/udbg.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
@ -220,7 +219,7 @@ define_machine(mpc832x_rdb) {
.name = "MPC832x RDB",
.probe = mpc832x_rdb_probe,
.setup_arch = mpc832x_rdb_setup_arch,
.init_IRQ = mpc83xx_ipic_and_qe_init_IRQ,
.init_IRQ = mpc83xx_ipic_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,


@ -40,7 +40,6 @@
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include "mpc83xx.h"
@ -202,7 +201,7 @@ define_machine(mpc836x_mds) {
.name = "MPC836x MDS",
.probe = mpc836x_mds_probe,
.setup_arch = mpc836x_mds_setup_arch,
.init_IRQ = mpc83xx_ipic_and_qe_init_IRQ,
.init_IRQ = mpc83xx_ipic_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,


@ -17,7 +17,6 @@
#include <asm/ipic.h>
#include <asm/udbg.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
@ -42,7 +41,7 @@ define_machine(mpc836x_rdk) {
.name = "MPC836x RDK",
.probe = mpc836x_rdk_probe,
.setup_arch = mpc836x_rdk_setup_arch,
.init_IRQ = mpc83xx_ipic_and_qe_init_IRQ,
.init_IRQ = mpc83xx_ipic_init_IRQ,
.get_irq = ipic_get_irq,
.restart = mpc83xx_restart,
.time_init = mpc83xx_time_init,


@ -72,13 +72,6 @@ extern int mpc837x_usb_cfg(void);
extern int mpc834x_usb_cfg(void);
extern int mpc831x_usb_cfg(void);
extern void mpc83xx_ipic_init_IRQ(void);
#ifdef CONFIG_QUICC_ENGINE
extern void mpc83xx_qe_init_IRQ(void);
extern void mpc83xx_ipic_and_qe_init_IRQ(void);
#else
static inline void __init mpc83xx_qe_init_IRQ(void) {}
#define mpc83xx_ipic_and_qe_init_IRQ mpc83xx_ipic_init_IRQ
#endif /* CONFIG_QUICC_ENGINE */
#ifdef CONFIG_PCI
extern void mpc83xx_setup_pci(void);


@ -24,7 +24,6 @@
#include <asm/mpic.h>
#include <asm/ehv_pic.h>
#include <asm/swiotlb.h>
#include <soc/fsl/qe/qe_ic.h>
#include <linux/of_platform.h>
#include <sysdev/fsl_soc.h>
@ -38,8 +37,6 @@ void __init corenet_gen_pic_init(void)
unsigned int flags = MPIC_BIG_ENDIAN | MPIC_SINGLE_DEST_CPU |
MPIC_NO_RESET;
struct device_node *np;
if (ppc_md.get_irq == mpic_get_coreint_irq)
flags |= MPIC_ENABLE_COREINT;
@ -47,13 +44,6 @@ void __init corenet_gen_pic_init(void)
BUG_ON(mpic == NULL);
mpic_init(mpic);
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (np) {
qe_ic_init(np, 0, qe_ic_cascade_low_mpic,
qe_ic_cascade_high_mpic);
of_node_put(np);
}
}
/*


@ -44,7 +44,6 @@
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include <asm/mpic.h>
#include <asm/swiotlb.h>
#include "smp.h"
@ -268,33 +267,8 @@ static void __init mpc85xx_mds_qe_init(void)
}
}
static void __init mpc85xx_mds_qeic_init(void)
{
struct device_node *np;
np = of_find_compatible_node(NULL, NULL, "fsl,qe");
if (!of_device_is_available(np)) {
of_node_put(np);
return;
}
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (!np) {
np = of_find_node_by_type(NULL, "qeic");
if (!np)
return;
}
if (machine_is(p1021_mds))
qe_ic_init(np, 0, qe_ic_cascade_low_mpic,
qe_ic_cascade_high_mpic);
else
qe_ic_init(np, 0, qe_ic_cascade_muxed_mpic, NULL);
of_node_put(np);
}
#else
static void __init mpc85xx_mds_qe_init(void) { }
static void __init mpc85xx_mds_qeic_init(void) { }
#endif /* CONFIG_QUICC_ENGINE */
static void __init mpc85xx_mds_setup_arch(void)
@ -364,7 +338,6 @@ static void __init mpc85xx_mds_pic_init(void)
BUG_ON(mpic == NULL);
mpic_init(mpic);
mpc85xx_mds_qeic_init();
}
static int __init mpc85xx_mds_probe(void)


@ -23,7 +23,6 @@
#include <asm/udbg.h>
#include <asm/mpic.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
@ -44,10 +43,6 @@ void __init mpc85xx_rdb_pic_init(void)
{
struct mpic *mpic;
#ifdef CONFIG_QUICC_ENGINE
struct device_node *np;
#endif
if (of_machine_is_compatible("fsl,MPC85XXRDB-CAMP")) {
mpic = mpic_alloc(NULL, 0, MPIC_NO_RESET |
MPIC_BIG_ENDIAN |
@ -62,18 +57,6 @@ void __init mpc85xx_rdb_pic_init(void)
BUG_ON(mpic == NULL);
mpic_init(mpic);
#ifdef CONFIG_QUICC_ENGINE
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (np) {
qe_ic_init(np, 0, qe_ic_cascade_low_mpic,
qe_ic_cascade_high_mpic);
of_node_put(np);
} else
pr_err("%s: Could not find qe-ic node\n", __func__);
#endif
}
/*


@ -19,7 +19,6 @@
#include <asm/udbg.h>
#include <asm/mpic.h>
#include <soc/fsl/qe/qe.h>
#include <soc/fsl/qe/qe_ic.h>
#include <sysdev/fsl_soc.h>
#include <sysdev/fsl_pci.h>
@ -31,26 +30,12 @@ static void __init twr_p1025_pic_init(void)
{
struct mpic *mpic;
#ifdef CONFIG_QUICC_ENGINE
struct device_node *np;
#endif
mpic = mpic_alloc(NULL, 0, MPIC_BIG_ENDIAN |
MPIC_SINGLE_DEST_CPU,
0, 256, " OpenPIC ");
BUG_ON(mpic == NULL);
mpic_init(mpic);
#ifdef CONFIG_QUICC_ENGINE
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (np) {
qe_ic_init(np, 0, qe_ic_cascade_low_mpic,
qe_ic_cascade_high_mpic);
of_node_put(np);
} else
pr_err("Could not find qe-ic node\n");
#endif
}
/* ************************************************************************


@ -2302,6 +2302,44 @@ out:
}
EXPORT_SYMBOL_GPL(of_genpd_add_subdomain);
/**
* of_genpd_remove_subdomain - Remove a subdomain from an I/O PM domain.
* @parent_spec: OF phandle args to use for parent PM domain look-up
* @subdomain_spec: OF phandle args to use for subdomain look-up
*
* Looks up a parent PM domain and subdomain based upon phandle args
* provided and removes the subdomain from the parent PM domain. Returns a
* negative error code on failure.
*/
int of_genpd_remove_subdomain(struct of_phandle_args *parent_spec,
struct of_phandle_args *subdomain_spec)
{
struct generic_pm_domain *parent, *subdomain;
int ret;
mutex_lock(&gpd_list_lock);
parent = genpd_get_from_provider(parent_spec);
if (IS_ERR(parent)) {
ret = PTR_ERR(parent);
goto out;
}
subdomain = genpd_get_from_provider(subdomain_spec);
if (IS_ERR(subdomain)) {
ret = PTR_ERR(subdomain);
goto out;
}
ret = pm_genpd_remove_subdomain(parent, subdomain);
out:
mutex_unlock(&gpd_list_lock);
return ret;
}
EXPORT_SYMBOL_GPL(of_genpd_remove_subdomain);
/**
* of_genpd_remove_last - Remove the last PM domain registered for a provider
* @provider: Pointer to device structure associated with provider

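A usage sketch of the new API: pair of_genpd_add_subdomain() at setup time
with of_genpd_remove_subdomain() on teardown or error. This mirrors
psci_pd_init_topology() from the cpuidle-psci-domain.c addition later in
this merge; the helper name here is illustrative:

    static int __init pd_link_topology(struct device_node *np, bool add)
    {
        struct device_node *node;
        struct of_phandle_args child, parent;
        int ret;

        for_each_child_of_node(np, node) {
            /* Each child domain points at its parent via "power-domains". */
            if (of_parse_phandle_with_args(node, "power-domains",
                                           "#power-domain-cells", 0, &parent))
                continue;

            child.np = node;
            child.args_count = 0;

            ret = add ? of_genpd_add_subdomain(&parent, &child) :
                        of_genpd_remove_subdomain(&parent, &child);
            of_node_put(parent.np);
            if (ret) {
                of_node_put(node);
                return ret;
            }
        }

        return 0;
    }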

@ -139,7 +139,6 @@ config TEGRA_ACONNECT
tristate "Tegra ACONNECT Bus Driver"
depends on ARCH_TEGRA_210_SOC
depends on OF && PM
select PM_CLK
help
Driver for the Tegra ACONNECT bus which is used to interface with
the devices inside the Audio Processing Engine (APE) for Tegra210.


@ -102,12 +102,11 @@ static int moxtet_match(struct device *dev, struct device_driver *drv)
return 0;
}
struct bus_type moxtet_bus_type = {
static struct bus_type moxtet_bus_type = {
.name = "moxtet",
.dev_groups = moxtet_dev_groups,
.match = moxtet_match,
};
EXPORT_SYMBOL_GPL(moxtet_bus_type);
int __moxtet_register_driver(struct module *owner,
struct moxtet_driver *mdrv)


@ -479,7 +479,7 @@ static void sysc_clkdm_deny_idle(struct sysc *ddata)
{
struct ti_sysc_platform_data *pdata;
if (ddata->legacy_mode)
if (ddata->legacy_mode || (ddata->cfg.quirks & SYSC_QUIRK_CLKDM_NOAUTO))
return;
pdata = dev_get_platdata(ddata->dev);
@ -491,7 +491,7 @@ static void sysc_clkdm_allow_idle(struct sysc *ddata)
{
struct ti_sysc_platform_data *pdata;
if (ddata->legacy_mode)
if (ddata->legacy_mode || (ddata->cfg.quirks & SYSC_QUIRK_CLKDM_NOAUTO))
return;
pdata = dev_get_platdata(ddata->dev);
@ -509,10 +509,8 @@ static int sysc_init_resets(struct sysc *ddata)
{
ddata->rsts =
devm_reset_control_get_optional_shared(ddata->dev, "rstctrl");
if (IS_ERR(ddata->rsts))
return PTR_ERR(ddata->rsts);
return 0;
return PTR_ERR_OR_ZERO(ddata->rsts);
}
/**
@ -1216,10 +1214,6 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
/* These drivers need to be fixed to not use pm_runtime_irq_safe() */
SYSC_QUIRK("gpio", 0, 0, 0x10, 0x114, 0x50600801, 0xffff00ff,
SYSC_QUIRK_LEGACY_IDLE | SYSC_QUIRK_OPT_CLKS_IN_RESET),
SYSC_QUIRK("mmu", 0, 0, 0x10, 0x14, 0x00000020, 0xffffffff,
SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("mmu", 0, 0, 0x10, 0x14, 0x00000030, 0xffffffff,
SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("sham", 0, 0x100, 0x110, 0x114, 0x40000c03, 0xffffffff,
SYSC_QUIRK_LEGACY_IDLE),
SYSC_QUIRK("smartreflex", 0, -1, 0x24, -1, 0x00000000, 0xffffffff,
@ -1251,6 +1245,12 @@ static const struct sysc_revision_quirk sysc_revision_quirks[] = {
/* Quirks that need to be set based on detected module */
SYSC_QUIRK("aess", 0, 0, 0x10, -1, 0x40000000, 0xffffffff,
SYSC_MODULE_QUIRK_AESS),
SYSC_QUIRK("dcan", 0x48480000, 0x20, -1, -1, 0xa3170504, 0xffffffff,
SYSC_QUIRK_CLKDM_NOAUTO),
SYSC_QUIRK("dwc3", 0x48880000, 0, 0x10, -1, 0x500a0200, 0xffffffff,
SYSC_QUIRK_CLKDM_NOAUTO),
SYSC_QUIRK("dwc3", 0x488c0000, 0, 0x10, -1, 0x500a0200, 0xffffffff,
SYSC_QUIRK_CLKDM_NOAUTO),
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x00000006, 0xffffffff,
SYSC_MODULE_QUIRK_HDQ1W),
SYSC_QUIRK("hdq1w", 0, 0, 0x14, 0x18, 0x0000000a, 0xffffffff,


@ -176,7 +176,7 @@ static int scmi_clocks_probe(struct scmi_device *sdev)
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_CLOCK },
{ SCMI_PROTOCOL_CLOCK, "clocks" },
{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);


@ -261,7 +261,7 @@ static void scmi_cpufreq_remove(struct scmi_device *sdev)
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_PERF },
{ SCMI_PROTOCOL_PERF, "cpufreq" },
{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);


@ -21,7 +21,9 @@ obj-$(CONFIG_ARM_U8500_CPUIDLE) += cpuidle-ux500.o
obj-$(CONFIG_ARM_AT91_CPUIDLE) += cpuidle-at91.o
obj-$(CONFIG_ARM_EXYNOS_CPUIDLE) += cpuidle-exynos.o
obj-$(CONFIG_ARM_CPUIDLE) += cpuidle-arm.o
obj-$(CONFIG_ARM_PSCI_CPUIDLE) += cpuidle-psci.o
obj-$(CONFIG_ARM_PSCI_CPUIDLE) += cpuidle_psci.o
cpuidle_psci-y := cpuidle-psci.o
cpuidle_psci-$(CONFIG_PM_GENERIC_DOMAINS_OF) += cpuidle-psci-domain.o
###############################################################################
# MIPS drivers


@ -0,0 +1,308 @@
// SPDX-License-Identifier: GPL-2.0
/*
* PM domains for CPUs via genpd - managed by cpuidle-psci.
*
* Copyright (C) 2019 Linaro Ltd.
* Author: Ulf Hansson <ulf.hansson@linaro.org>
*
*/
#define pr_fmt(fmt) "CPUidle PSCI: " fmt
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/pm_domain.h>
#include <linux/pm_runtime.h>
#include <linux/psci.h>
#include <linux/slab.h>
#include <linux/string.h>
#include "cpuidle-psci.h"
struct psci_pd_provider {
struct list_head link;
struct device_node *node;
};
static LIST_HEAD(psci_pd_providers);
static bool osi_mode_enabled __initdata;
static int psci_pd_power_off(struct generic_pm_domain *pd)
{
struct genpd_power_state *state = &pd->states[pd->state_idx];
u32 *pd_state;
if (!state->data)
return 0;
/* OSI mode is enabled, set the corresponding domain state. */
pd_state = state->data;
psci_set_domain_state(*pd_state);
return 0;
}
static int __init psci_pd_parse_state_nodes(struct genpd_power_state *states,
int state_count)
{
int i, ret;
u32 psci_state, *psci_state_buf;
for (i = 0; i < state_count; i++) {
ret = psci_dt_parse_state_node(to_of_node(states[i].fwnode),
&psci_state);
if (ret)
goto free_state;
psci_state_buf = kmalloc(sizeof(u32), GFP_KERNEL);
if (!psci_state_buf) {
ret = -ENOMEM;
goto free_state;
}
*psci_state_buf = psci_state;
states[i].data = psci_state_buf;
}
return 0;
free_state:
i--;
for (; i >= 0; i--)
kfree(states[i].data);
return ret;
}
static int __init psci_pd_parse_states(struct device_node *np,
struct genpd_power_state **states, int *state_count)
{
int ret;
/* Parse the domain idle states. */
ret = of_genpd_parse_idle_states(np, states, state_count);
if (ret)
return ret;
/* Fill out the PSCI specifics for each found state. */
ret = psci_pd_parse_state_nodes(*states, *state_count);
if (ret)
kfree(*states);
return ret;
}
static void psci_pd_free_states(struct genpd_power_state *states,
unsigned int state_count)
{
int i;
for (i = 0; i < state_count; i++)
kfree(states[i].data);
kfree(states);
}
static int __init psci_pd_init(struct device_node *np)
{
struct generic_pm_domain *pd;
struct psci_pd_provider *pd_provider;
struct dev_power_governor *pd_gov;
struct genpd_power_state *states = NULL;
int ret = -ENOMEM, state_count = 0;
pd = kzalloc(sizeof(*pd), GFP_KERNEL);
if (!pd)
goto out;
pd_provider = kzalloc(sizeof(*pd_provider), GFP_KERNEL);
if (!pd_provider)
goto free_pd;
pd->name = kasprintf(GFP_KERNEL, "%pOF", np);
if (!pd->name)
goto free_pd_prov;
/*
* Parse the domain idle states and let genpd manage the state selection
* for those being compatible with "domain-idle-state".
*/
ret = psci_pd_parse_states(np, &states, &state_count);
if (ret)
goto free_name;
pd->free_states = psci_pd_free_states;
pd->name = kbasename(pd->name);
pd->power_off = psci_pd_power_off;
pd->states = states;
pd->state_count = state_count;
pd->flags |= GENPD_FLAG_IRQ_SAFE | GENPD_FLAG_CPU_DOMAIN;
/* Use governor for CPU PM domains if it has some states to manage. */
pd_gov = state_count > 0 ? &pm_domain_cpu_gov : NULL;
ret = pm_genpd_init(pd, pd_gov, false);
if (ret) {
psci_pd_free_states(states, state_count);
goto free_name;
}
ret = of_genpd_add_provider_simple(np, pd);
if (ret)
goto remove_pd;
pd_provider->node = of_node_get(np);
list_add(&pd_provider->link, &psci_pd_providers);
pr_debug("init PM domain %s\n", pd->name);
return 0;
remove_pd:
pm_genpd_remove(pd);
free_name:
kfree(pd->name);
free_pd_prov:
kfree(pd_provider);
free_pd:
kfree(pd);
out:
pr_err("failed to init PM domain ret=%d %pOF\n", ret, np);
return ret;
}
static void __init psci_pd_remove(void)
{
struct psci_pd_provider *pd_provider, *it;
struct generic_pm_domain *genpd;
list_for_each_entry_safe(pd_provider, it, &psci_pd_providers, link) {
of_genpd_del_provider(pd_provider->node);
genpd = of_genpd_remove_last(pd_provider->node);
if (!IS_ERR(genpd))
kfree(genpd);
of_node_put(pd_provider->node);
list_del(&pd_provider->link);
kfree(pd_provider);
}
}
static int __init psci_pd_init_topology(struct device_node *np, bool add)
{
struct device_node *node;
struct of_phandle_args child, parent;
int ret;
for_each_child_of_node(np, node) {
if (of_parse_phandle_with_args(node, "power-domains",
"#power-domain-cells", 0, &parent))
continue;
child.np = node;
child.args_count = 0;
ret = add ? of_genpd_add_subdomain(&parent, &child) :
of_genpd_remove_subdomain(&parent, &child);
of_node_put(parent.np);
if (ret) {
of_node_put(node);
return ret;
}
}
return 0;
}
static int __init psci_pd_add_topology(struct device_node *np)
{
return psci_pd_init_topology(np, true);
}
static void __init psci_pd_remove_topology(struct device_node *np)
{
psci_pd_init_topology(np, false);
}
static const struct of_device_id psci_of_match[] __initconst = {
{ .compatible = "arm,psci-1.0" },
{}
};
static int __init psci_idle_init_domains(void)
{
struct device_node *np = of_find_matching_node(NULL, psci_of_match);
struct device_node *node;
int ret = 0, pd_count = 0;
if (!np)
return -ENODEV;
/* Currently limit the hierarchical topology to be used in OSI mode. */
if (!psci_has_osi_support())
goto out;
/*
* Parse child nodes for the "#power-domain-cells" property and
* initialize a genpd/genpd-of-provider pair when it's found.
*/
for_each_child_of_node(np, node) {
if (!of_find_property(node, "#power-domain-cells", NULL))
continue;
ret = psci_pd_init(node);
if (ret)
goto put_node;
pd_count++;
}
/* Bail out if not using the hierarchical CPU topology. */
if (!pd_count)
goto out;
/* Link genpd masters/subdomains to model the CPU topology. */
ret = psci_pd_add_topology(np);
if (ret)
goto remove_pd;
/* Try to enable OSI mode. */
ret = psci_set_osi_mode();
if (ret) {
pr_warn("failed to enable OSI mode: %d\n", ret);
psci_pd_remove_topology(np);
goto remove_pd;
}
osi_mode_enabled = true;
of_node_put(np);
pr_info("Initialized CPU PM domain topology\n");
return pd_count;
put_node:
of_node_put(node);
remove_pd:
if (pd_count)
psci_pd_remove();
pr_err("failed to create CPU PM domains ret=%d\n", ret);
out:
of_node_put(np);
return ret;
}
subsys_initcall(psci_idle_init_domains);
struct device __init *psci_dt_attach_cpu(int cpu)
{
struct device *dev;
if (!osi_mode_enabled)
return NULL;
dev = dev_pm_domain_attach_by_name(get_cpu_device(cpu), "psci");
if (IS_ERR_OR_NULL(dev))
return dev;
pm_runtime_irq_safe(dev);
if (cpu_online(cpu))
pm_runtime_get_sync(dev);
return dev;
}


@ -8,6 +8,7 @@
#define pr_fmt(fmt) "CPUidle PSCI: " fmt
#include <linux/cpuhotplug.h>
#include <linux/cpuidle.h>
#include <linux/cpumask.h>
#include <linux/cpu_pm.h>
@ -16,21 +17,107 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/psci.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <asm/cpuidle.h>
#include "cpuidle-psci.h"
#include "dt_idle_states.h"
static DEFINE_PER_CPU_READ_MOSTLY(u32 *, psci_power_state);
struct psci_cpuidle_data {
u32 *psci_states;
struct device *dev;
};
static DEFINE_PER_CPU_READ_MOSTLY(struct psci_cpuidle_data, psci_cpuidle_data);
static DEFINE_PER_CPU(u32, domain_state);
static bool psci_cpuidle_use_cpuhp __initdata;
void psci_set_domain_state(u32 state)
{
__this_cpu_write(domain_state, state);
}
static inline u32 psci_get_domain_state(void)
{
return __this_cpu_read(domain_state);
}
static inline int psci_enter_state(int idx, u32 state)
{
return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter, idx, state);
}
static int psci_enter_domain_idle_state(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int idx)
{
struct psci_cpuidle_data *data = this_cpu_ptr(&psci_cpuidle_data);
u32 *states = data->psci_states;
struct device *pd_dev = data->dev;
u32 state;
int ret;
/* Do runtime PM to manage a hierarchical CPU topology. */
pm_runtime_put_sync_suspend(pd_dev);
state = psci_get_domain_state();
if (!state)
state = states[idx];
ret = psci_enter_state(idx, state);
pm_runtime_get_sync(pd_dev);
/* Clear the domain state to start fresh when back from idle. */
psci_set_domain_state(0);
return ret;
}
static int psci_idle_cpuhp_up(unsigned int cpu)
{
struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
if (pd_dev)
pm_runtime_get_sync(pd_dev);
return 0;
}
static int psci_idle_cpuhp_down(unsigned int cpu)
{
struct device *pd_dev = __this_cpu_read(psci_cpuidle_data.dev);
if (pd_dev) {
pm_runtime_put_sync(pd_dev);
/* Clear domain state to start fresh at next online. */
psci_set_domain_state(0);
}
return 0;
}
static void __init psci_idle_init_cpuhp(void)
{
int err;
if (!psci_cpuidle_use_cpuhp)
return;
err = cpuhp_setup_state_nocalls(CPUHP_AP_CPU_PM_STARTING,
"cpuidle/psci:online",
psci_idle_cpuhp_up,
psci_idle_cpuhp_down);
if (err)
pr_warn("Failed %d while setup cpuhp state\n", err);
}
static int psci_enter_idle_state(struct cpuidle_device *dev,
struct cpuidle_driver *drv, int idx)
{
u32 *state = __this_cpu_read(psci_power_state);
u32 *state = __this_cpu_read(psci_cpuidle_data.psci_states);
return CPU_PM_CPU_IDLE_ENTER_PARAM(psci_cpu_suspend_enter,
idx, state[idx - 1]);
return psci_enter_state(idx, state[idx]);
}
static struct cpuidle_driver psci_idle_driver __initdata = {
@ -56,7 +143,7 @@ static const struct of_device_id psci_idle_state_match[] __initconst = {
{ },
};
static int __init psci_dt_parse_state_node(struct device_node *np, u32 *state)
int __init psci_dt_parse_state_node(struct device_node *np, u32 *state)
{
int err = of_property_read_u32(np, "arm,psci-suspend-param", state);
@ -73,28 +160,25 @@ static int __init psci_dt_parse_state_node(struct device_node *np, u32 *state)
return 0;
}
static int __init psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu)
static int __init psci_dt_cpu_init_idle(struct cpuidle_driver *drv,
struct device_node *cpu_node,
unsigned int state_count, int cpu)
{
int i, ret = 0, count = 0;
int i, ret = 0;
u32 *psci_states;
struct device_node *state_node;
struct psci_cpuidle_data *data = per_cpu_ptr(&psci_cpuidle_data, cpu);
/* Count idle states */
while ((state_node = of_parse_phandle(cpu_node, "cpu-idle-states",
count))) {
count++;
of_node_put(state_node);
}
if (!count)
return -ENODEV;
psci_states = kcalloc(count, sizeof(*psci_states), GFP_KERNEL);
state_count++; /* Add WFI state too */
psci_states = kcalloc(state_count, sizeof(*psci_states), GFP_KERNEL);
if (!psci_states)
return -ENOMEM;
for (i = 0; i < count; i++) {
state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i);
for (i = 1; i < state_count; i++) {
state_node = of_get_cpu_state_node(cpu_node, i - 1);
if (!state_node)
break;
ret = psci_dt_parse_state_node(state_node, &psci_states[i]);
of_node_put(state_node);
@ -104,8 +188,33 @@ static int __init psci_dt_cpu_init_idle(struct device_node *cpu_node, int cpu)
pr_debug("psci-power-state %#x index %d\n", psci_states[i], i);
}
/* Idle states parsed correctly, initialize per-cpu pointer */
per_cpu(psci_power_state, cpu) = psci_states;
if (i != state_count) {
ret = -ENODEV;
goto free_mem;
}
/* Currently limit the hierarchical topology to be used in OSI mode. */
if (psci_has_osi_support()) {
data->dev = psci_dt_attach_cpu(cpu);
if (IS_ERR(data->dev)) {
ret = PTR_ERR(data->dev);
goto free_mem;
}
/*
* Using the deepest state for the CPU to trigger a potential
* selection of a shared state for the domain, assumes the
* domain states are all deeper states.
*/
if (data->dev) {
drv->states[state_count - 1].enter =
psci_enter_domain_idle_state;
psci_cpuidle_use_cpuhp = true;
}
}
/* Idle states parsed correctly, store them in the per-cpu struct. */
data->psci_states = psci_states;
return 0;
free_mem:
@ -113,7 +222,8 @@ free_mem:
return ret;
}
static __init int psci_cpu_init_idle(unsigned int cpu)
static __init int psci_cpu_init_idle(struct cpuidle_driver *drv,
unsigned int cpu, unsigned int state_count)
{
struct device_node *cpu_node;
int ret;
@ -129,7 +239,7 @@ static __init int psci_cpu_init_idle(unsigned int cpu)
if (!cpu_node)
return -ENODEV;
ret = psci_dt_cpu_init_idle(cpu_node, cpu);
ret = psci_dt_cpu_init_idle(drv, cpu_node, state_count, cpu);
of_node_put(cpu_node);
@ -185,7 +295,7 @@ static int __init psci_idle_init_cpu(int cpu)
/*
* Initialize PSCI idle states.
*/
ret = psci_cpu_init_idle(cpu);
ret = psci_cpu_init_idle(drv, cpu, ret);
if (ret) {
pr_err("CPU %d failed to PSCI idle\n", cpu);
goto out_kfree_drv;
@ -221,6 +331,7 @@ static int __init psci_idle_init(void)
goto out_fail;
}
psci_idle_init_cpuhp();
return 0;
out_fail:


@ -0,0 +1,17 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __CPUIDLE_PSCI_H
#define __CPUIDLE_PSCI_H
struct device_node;
void psci_set_domain_state(u32 state);
int __init psci_dt_parse_state_node(struct device_node *np, u32 *state);
#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
struct device __init *psci_dt_attach_cpu(int cpu);
#else
static inline struct device __init *psci_dt_attach_cpu(int cpu) { return NULL; }
#endif
#endif /* __CPUIDLE_PSCI_H */


@ -111,8 +111,7 @@ static bool idle_state_valid(struct device_node *state_node, unsigned int idx,
for (cpu = cpumask_next(cpumask_first(cpumask), cpumask);
cpu < nr_cpu_ids; cpu = cpumask_next(cpu, cpumask)) {
cpu_node = of_cpu_device_node_get(cpu);
curr_state_node = of_parse_phandle(cpu_node, "cpu-idle-states",
idx);
curr_state_node = of_get_cpu_state_node(cpu_node, idx);
if (state_node != curr_state_node)
valid = false;
@ -170,7 +169,7 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
cpu_node = of_cpu_device_node_get(cpumask_first(cpumask));
for (i = 0; ; i++) {
state_node = of_parse_phandle(cpu_node, "cpu-idle-states", i);
state_node = of_get_cpu_state_node(cpu_node, i);
if (!state_node)
break;


@ -239,14 +239,6 @@ config QCOM_SCM
depends on ARM || ARM64
select RESET_CONTROLLER
config QCOM_SCM_32
def_bool y
depends on QCOM_SCM && ARM
config QCOM_SCM_64
def_bool y
depends on QCOM_SCM && ARM64
config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
bool "Qualcomm download mode enabled by default"
depends on QCOM_SCM


@ -17,10 +17,7 @@ obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o
obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o
obj-$(CONFIG_FW_CFG_SYSFS) += qemu_fw_cfg.o
obj-$(CONFIG_QCOM_SCM) += qcom_scm.o
obj-$(CONFIG_QCOM_SCM_64) += qcom_scm-64.o
obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o
CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch armv7-a\n.arch_extension sec,-DREQUIRES_SEC=1) -march=armv7-a
obj-$(CONFIG_QCOM_SCM) += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o
obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o
obj-$(CONFIG_TURRIS_MOX_RWTM) += turris-mox-rwtm.o


@ -28,8 +28,12 @@ scmi_dev_match_id(struct scmi_device *scmi_dev, struct scmi_driver *scmi_drv)
return NULL;
for (; id->protocol_id; id++)
if (id->protocol_id == scmi_dev->protocol_id)
return id;
if (id->protocol_id == scmi_dev->protocol_id) {
if (!id->name)
return id;
else if (!strcmp(id->name, scmi_dev->name))
return id;
}
return NULL;
}
@ -56,6 +60,11 @@ static int scmi_protocol_init(int protocol_id, struct scmi_handle *handle)
return fn(handle);
}
static int scmi_protocol_dummy_init(struct scmi_handle *handle)
{
return 0;
}
static int scmi_dev_probe(struct device *dev)
{
struct scmi_driver *scmi_drv = to_scmi_driver(dev->driver);
@ -74,6 +83,10 @@ static int scmi_dev_probe(struct device *dev)
if (ret)
return ret;
/* Skip protocol initialisation for additional devices */
idr_replace(&scmi_protocols, &scmi_protocol_dummy_init,
scmi_dev->protocol_id);
return scmi_drv->probe(scmi_dev);
}
@ -125,7 +138,8 @@ static void scmi_device_release(struct device *dev)
}
struct scmi_device *
scmi_device_create(struct device_node *np, struct device *parent, int protocol)
scmi_device_create(struct device_node *np, struct device *parent, int protocol,
const char *name)
{
int id, retval;
struct scmi_device *scmi_dev;
@ -134,8 +148,15 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol)
if (!scmi_dev)
return NULL;
scmi_dev->name = kstrdup_const(name ?: "unknown", GFP_KERNEL);
if (!scmi_dev->name) {
kfree(scmi_dev);
return NULL;
}
id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL);
if (id < 0) {
kfree_const(scmi_dev->name);
kfree(scmi_dev);
return NULL;
}
@ -154,6 +175,7 @@ scmi_device_create(struct device_node *np, struct device *parent, int protocol)
return scmi_dev;
put_dev:
kfree_const(scmi_dev->name);
put_device(&scmi_dev->dev);
ida_simple_remove(&scmi_bus_id, id);
return NULL;
@ -161,6 +183,7 @@ put_dev:
void scmi_device_destroy(struct scmi_device *scmi_dev)
{
kfree_const(scmi_dev->name);
scmi_handle_put(scmi_dev->handle);
ida_simple_remove(&scmi_bus_id, scmi_dev->id);
device_unregister(&scmi_dev->dev);


@ -65,6 +65,7 @@ struct scmi_clock_set_rate {
};
struct clock_info {
u32 version;
int num_clocks;
int max_async_req;
atomic_t cur_async_req;
@ -340,6 +341,7 @@ static int scmi_clock_protocol_init(struct scmi_handle *handle)
scmi_clock_describe_rates_get(handle, clkid, clk);
}
cinfo->version = version;
handle->clk_ops = &clk_ops;
handle->clk_priv = cinfo;


@@ -81,6 +81,7 @@ struct scmi_msg {
/**
* struct scmi_xfer - Structure representing a message flow
*
* @transfer_id: Unique ID for debug & profiling purpose
* @hdr: Transmit message header
* @tx: Transmit message
* @rx: Receive message, the buffer should be pre-allocated to store
@@ -90,6 +91,7 @@ struct scmi_msg {
* @async: pointer to delayed response message received event completion
*/
struct scmi_xfer {
int transfer_id;
struct scmi_msg_hdr hdr;
struct scmi_msg tx;
struct scmi_msg rx;


@@ -29,6 +29,9 @@
#include "common.h"
#define CREATE_TRACE_POINTS
#include <trace/events/scmi.h>
#define MSG_ID_MASK GENMASK(7, 0)
#define MSG_XTRACT_ID(hdr) FIELD_GET(MSG_ID_MASK, (hdr))
#define MSG_TYPE_MASK GENMASK(9, 8)
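For illustration only (the header value is hypothetical, not from the patch), a worked unpacking with these masks:

/* MSG_XTRACT_ID(0x012a)            == FIELD_GET(GENMASK(7, 0), 0x012a) == 0x2a
 * FIELD_GET(MSG_TYPE_MASK, 0x012a) == FIELD_GET(GENMASK(9, 8), 0x012a) == 0x1
 */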
@@ -61,6 +64,8 @@ enum scmi_error_codes {
static LIST_HEAD(scmi_list);
/* Protection for the entire list */
static DEFINE_MUTEX(scmi_list_mutex);
/* Track the unique id for the transfers for debug & profiling purpose */
static atomic_t transfer_last_id;
/**
* struct scmi_xfers_info - Structure to manage transfer information
@@ -304,6 +309,7 @@ static struct scmi_xfer *scmi_xfer_get(const struct scmi_handle *handle,
xfer = &minfo->xfer_block[xfer_id];
xfer->hdr.seq = xfer_id;
reinit_completion(&xfer->done);
xfer->transfer_id = atomic_inc_return(&transfer_last_id);
return xfer;
}
@@ -374,6 +380,10 @@ static void scmi_rx_callback(struct mbox_client *cl, void *m)
scmi_fetch_response(xfer, mem);
trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
msg_type);
if (msg_type == MSG_TYPE_DELAYED_RESP)
complete(xfer->async_done);
else
@@ -439,6 +449,10 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
if (unlikely(!cinfo))
return -EINVAL;
trace_scmi_xfer_begin(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
xfer->hdr.poll_completion);
ret = mbox_send_message(cinfo->chan, xfer);
if (ret < 0) {
dev_dbg(dev, "mbox send fail %d\n", ret);
@@ -478,6 +492,10 @@ int scmi_do_xfer(const struct scmi_handle *handle, struct scmi_xfer *xfer)
*/
mbox_client_txdone(cinfo->chan, ret);
trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id,
xfer->hdr.protocol_id, xfer->hdr.seq,
xfer->hdr.status);
return ret;
}
@@ -735,6 +753,11 @@ static int scmi_mbox_chan_setup(struct scmi_info *info, struct device *dev,
idx = tx ? 0 : 1;
idr = tx ? &info->tx_idr : &info->rx_idr;
/* check if already allocated, used for multiple device per protocol */
cinfo = idr_find(idr, prot_id);
if (cinfo)
return 0;
if (scmi_mailbox_check(np, idx)) {
cinfo = idr_find(idr, SCMI_PROTOCOL_BASE);
if (unlikely(!cinfo)) /* Possible only if platform has no Rx */
@@ -803,11 +826,11 @@ scmi_mbox_txrx_setup(struct scmi_info *info, struct device *dev, int prot_id)
static inline void
scmi_create_protocol_device(struct device_node *np, struct scmi_info *info,
int prot_id)
int prot_id, const char *name)
{
struct scmi_device *sdev;
sdev = scmi_device_create(np, info->dev, prot_id);
sdev = scmi_device_create(np, info->dev, prot_id, name);
if (!sdev) {
dev_err(info->dev, "failed to create %d protocol device\n",
prot_id);
@@ -824,6 +847,40 @@ scmi_create_protocol_device(struct device_node *np, struct scmi_info *info,
scmi_set_handle(sdev);
}
#define MAX_SCMI_DEV_PER_PROTOCOL 2
struct scmi_prot_devnames {
int protocol_id;
char *names[MAX_SCMI_DEV_PER_PROTOCOL];
};
static struct scmi_prot_devnames devnames[] = {
{ SCMI_PROTOCOL_POWER, { "genpd" },},
{ SCMI_PROTOCOL_PERF, { "cpufreq" },},
{ SCMI_PROTOCOL_CLOCK, { "clocks" },},
{ SCMI_PROTOCOL_SENSOR, { "hwmon" },},
{ SCMI_PROTOCOL_RESET, { "reset" },},
};
static inline void
scmi_create_protocol_devices(struct device_node *np, struct scmi_info *info,
int prot_id)
{
int loop, cnt;
for (loop = 0; loop < ARRAY_SIZE(devnames); loop++) {
if (devnames[loop].protocol_id != prot_id)
continue;
for (cnt = 0; cnt < ARRAY_SIZE(devnames[loop].names); cnt++) {
const char *name = devnames[loop].names[cnt];
if (name)
scmi_create_protocol_device(np, info, prot_id,
name);
}
}
}
static int scmi_probe(struct platform_device *pdev)
{
int ret;
@@ -892,7 +949,7 @@ static int scmi_probe(struct platform_device *pdev)
continue;
}
scmi_create_protocol_device(child, info, prot_id);
scmi_create_protocol_devices(child, info, prot_id);
}
return 0;
@@ -940,6 +997,52 @@ static int scmi_remove(struct platform_device *pdev)
return ret;
}
static ssize_t protocol_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scmi_info *info = dev_get_drvdata(dev);
return sprintf(buf, "%u.%u\n", info->version.major_ver,
info->version.minor_ver);
}
static DEVICE_ATTR_RO(protocol_version);
static ssize_t firmware_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scmi_info *info = dev_get_drvdata(dev);
return sprintf(buf, "0x%x\n", info->version.impl_ver);
}
static DEVICE_ATTR_RO(firmware_version);
static ssize_t vendor_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scmi_info *info = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", info->version.vendor_id);
}
static DEVICE_ATTR_RO(vendor_id);
static ssize_t sub_vendor_id_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct scmi_info *info = dev_get_drvdata(dev);
return sprintf(buf, "%s\n", info->version.sub_vendor_id);
}
static DEVICE_ATTR_RO(sub_vendor_id);
static struct attribute *versions_attrs[] = {
&dev_attr_firmware_version.attr,
&dev_attr_protocol_version.attr,
&dev_attr_vendor_id.attr,
&dev_attr_sub_vendor_id.attr,
NULL,
};
ATTRIBUTE_GROUPS(versions);
static const struct scmi_desc scmi_generic_desc = {
.max_rx_timeout_ms = 30, /* We may increase this if required */
.max_msg = 20, /* Limited by MBOX_TX_QUEUE_LEN */
@@ -958,6 +1061,7 @@ static struct platform_driver scmi_driver = {
.driver = {
.name = "arm-scmi",
.of_match_table = scmi_of_match,
.dev_groups = versions_groups,
},
.probe = scmi_probe,
.remove = scmi_remove,


@@ -145,6 +145,7 @@ struct perf_dom_info {
};
struct scmi_perf_info {
u32 version;
int num_domains;
bool power_scale_mw;
u64 stats_addr;
@@ -736,6 +737,7 @@ static int scmi_perf_protocol_init(struct scmi_handle *handle)
scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
}
pinfo->version = version;
handle->perf_ops = &perf_ops;
handle->perf_priv = pinfo;


@@ -50,6 +50,7 @@ struct power_dom_info {
};
struct scmi_power_info {
u32 version;
int num_domains;
u64 stats_addr;
u32 stats_size;
@@ -207,6 +208,7 @@ static int scmi_power_protocol_init(struct scmi_handle *handle)
scmi_power_domain_attributes_get(handle, domain, dom);
}
pinfo->version = version;
handle->power_ops = &power_ops;
handle->power_priv = pinfo;


@@ -48,6 +48,7 @@ struct reset_dom_info {
};
struct scmi_reset_info {
u32 version;
int num_domains;
struct reset_dom_info *dom_info;
};
@@ -217,6 +218,7 @@ static int scmi_reset_protocol_init(struct scmi_handle *handle)
scmi_reset_domain_attributes_get(handle, domain, dom);
}
pinfo->version = version;
handle->reset_ops = &reset_ops;
handle->reset_priv = pinfo;


@@ -112,7 +112,7 @@ static int scmi_pm_domain_probe(struct scmi_device *sdev)
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_POWER },
{ SCMI_PROTOCOL_POWER, "genpd" },
{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);


@@ -68,6 +68,7 @@ struct scmi_msg_sensor_reading_get {
};
struct sensors_info {
u32 version;
int num_sensors;
int max_requests;
u64 reg_addr;
@@ -294,6 +295,7 @@ static int scmi_sensors_protocol_init(struct scmi_handle *handle)
scmi_sensor_description_get(handle, sinfo);
sinfo->version = version;
handle->sensor_ops = &sensor_ops;
handle->sensor_priv = sinfo;


@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0-only
config IMX_DSP
bool "IMX DSP Protocol driver"
tristate "IMX DSP Protocol driver"
depends on IMX_MBOX
help
This enables DSP IPC protocol between host AP (Linux)


@@ -97,7 +97,7 @@ static inline bool psci_has_ext_power_state(void)
PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK;
}
static inline bool psci_has_osi_support(void)
bool psci_has_osi_support(void)
{
return psci_cpu_suspend_feature & PSCI_1_0_OS_INITIATED;
}
@@ -162,6 +162,15 @@ static u32 psci_get_version(void)
return invoke_psci_fn(PSCI_0_2_FN_PSCI_VERSION, 0, 0, 0);
}
int psci_set_osi_mode(void)
{
int err;
err = invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
PSCI_1_0_SUSPEND_MODE_OSI, 0, 0);
return psci_to_linux_errno(err);
}
static int psci_cpu_suspend(u32 state, unsigned long entry_point)
{
int err;
@@ -544,9 +553,14 @@ static int __init psci_1_0_init(struct device_node *np)
if (err)
return err;
if (psci_has_osi_support())
if (psci_has_osi_support()) {
pr_info("OSI mode supported.\n");
/* Default to PC mode. */
invoke_psci_fn(PSCI_1_0_FN_SET_SUSPEND_MODE,
PSCI_1_0_SUSPEND_MODE_PC, 0, 0);
}
return 0;
}
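psci_has_osi_support() loses its static above and psci_set_osi_mode() is new; a minimal consumer sketch of how a cpuidle back-end might opt into OS-initiated mode (the function name below is hypothetical, not part of this series):

/* Illustrative only: opt into OSI when the firmware advertises it,
 * otherwise stay with the Platform-Coordinated default set above.
 */
static int example_enable_osi_mode(void)
{
	if (!psci_has_osi_support())
		return -EOPNOTSUPP;

	return psci_set_osi_mode();	/* 0 on success, negative errno on failure */
}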


@@ -1,671 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2010,2015, The Linux Foundation. All rights reserved.
* Copyright (C) 2015 Linaro Ltd.
*/
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/qcom_scm.h>
#include <linux/dma-mapping.h>
#include "qcom_scm.h"
#define QCOM_SCM_FLAG_COLDBOOT_CPU0 0x00
#define QCOM_SCM_FLAG_COLDBOOT_CPU1 0x01
#define QCOM_SCM_FLAG_COLDBOOT_CPU2 0x08
#define QCOM_SCM_FLAG_COLDBOOT_CPU3 0x20
#define QCOM_SCM_FLAG_WARMBOOT_CPU0 0x04
#define QCOM_SCM_FLAG_WARMBOOT_CPU1 0x02
#define QCOM_SCM_FLAG_WARMBOOT_CPU2 0x10
#define QCOM_SCM_FLAG_WARMBOOT_CPU3 0x40
struct qcom_scm_entry {
int flag;
void *entry;
};
static struct qcom_scm_entry qcom_scm_wb[] = {
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU0 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU1 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU2 },
{ .flag = QCOM_SCM_FLAG_WARMBOOT_CPU3 },
};
static DEFINE_MUTEX(qcom_scm_lock);
/**
* struct qcom_scm_command - one SCM command buffer
* @len: total available memory for command and response
* @buf_offset: start of command buffer
* @resp_hdr_offset: start of response buffer
* @id: command to be executed
* @buf: buffer returned from qcom_scm_get_command_buffer()
*
* An SCM command is laid out in memory as follows:
*
* ------------------- <--- struct qcom_scm_command
* | command header |
* ------------------- <--- qcom_scm_get_command_buffer()
* | command buffer |
* ------------------- <--- struct qcom_scm_response and
* | response header | qcom_scm_command_to_response()
* ------------------- <--- qcom_scm_get_response_buffer()
* | response buffer |
* -------------------
*
* There can be arbitrary padding between the headers and buffers so
* you should always use the appropriate qcom_scm_get_*_buffer() routines
* to access the buffers in a safe manner.
*/
struct qcom_scm_command {
__le32 len;
__le32 buf_offset;
__le32 resp_hdr_offset;
__le32 id;
__le32 buf[0];
};
/**
* struct qcom_scm_response - one SCM response buffer
* @len: total available memory for response
* @buf_offset: start of response data relative to start of qcom_scm_response
* @is_complete: indicates if the command has finished processing
*/
struct qcom_scm_response {
__le32 len;
__le32 buf_offset;
__le32 is_complete;
};
/**
* qcom_scm_command_to_response() - Get a pointer to a qcom_scm_response
* @cmd: command
*
* Returns a pointer to a response for a command.
*/
static inline struct qcom_scm_response *qcom_scm_command_to_response(
const struct qcom_scm_command *cmd)
{
return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset);
}
/**
* qcom_scm_get_command_buffer() - Get a pointer to a command buffer
* @cmd: command
*
* Returns a pointer to the command buffer of a command.
*/
static inline void *qcom_scm_get_command_buffer(const struct qcom_scm_command *cmd)
{
return (void *)cmd->buf;
}
/**
* qcom_scm_get_response_buffer() - Get a pointer to a response buffer
* @rsp: response
*
* Returns a pointer to a response buffer of a response.
*/
static inline void *qcom_scm_get_response_buffer(const struct qcom_scm_response *rsp)
{
return (void *)rsp + le32_to_cpu(rsp->buf_offset);
}
static u32 smc(u32 cmd_addr)
{
int context_id;
register u32 r0 asm("r0") = 1;
register u32 r1 asm("r1") = (u32)&context_id;
register u32 r2 asm("r2") = cmd_addr;
do {
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r0")
__asmeq("%2", "r1")
__asmeq("%3", "r2")
#ifdef REQUIRES_SEC
".arch_extension sec\n"
#endif
"smc #0 @ switch to secure world\n"
: "=r" (r0)
: "r" (r0), "r" (r1), "r" (r2)
: "r3", "r12");
} while (r0 == QCOM_SCM_INTERRUPTED);
return r0;
}
/**
* qcom_scm_call() - Send an SCM command
* @dev: struct device
* @svc_id: service identifier
* @cmd_id: command identifier
* @cmd_buf: command buffer
* @cmd_len: length of the command buffer
* @resp_buf: response buffer
* @resp_len: length of the response buffer
*
* Sends a command to the SCM and waits for the command to finish processing.
*
* A note on cache maintenance:
* Note that any buffers that are expected to be accessed by the secure world
* must be flushed before invoking qcom_scm_call and invalidated in the cache
* immediately after qcom_scm_call returns. Cache maintenance on the command
* and response buffers is taken care of by qcom_scm_call; however, callers are
* responsible for any other cached buffers passed over to the secure world.
*/
static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
const void *cmd_buf, size_t cmd_len, void *resp_buf,
size_t resp_len)
{
int ret;
struct qcom_scm_command *cmd;
struct qcom_scm_response *rsp;
size_t alloc_len = sizeof(*cmd) + cmd_len + sizeof(*rsp) + resp_len;
dma_addr_t cmd_phys;
cmd = kzalloc(PAGE_ALIGN(alloc_len), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->len = cpu_to_le32(alloc_len);
cmd->buf_offset = cpu_to_le32(sizeof(*cmd));
cmd->resp_hdr_offset = cpu_to_le32(sizeof(*cmd) + cmd_len);
cmd->id = cpu_to_le32((svc_id << 10) | cmd_id);
if (cmd_buf)
memcpy(qcom_scm_get_command_buffer(cmd), cmd_buf, cmd_len);
rsp = qcom_scm_command_to_response(cmd);
cmd_phys = dma_map_single(dev, cmd, alloc_len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, cmd_phys)) {
kfree(cmd);
return -ENOMEM;
}
mutex_lock(&qcom_scm_lock);
ret = smc(cmd_phys);
if (ret < 0)
ret = qcom_scm_remap_error(ret);
mutex_unlock(&qcom_scm_lock);
if (ret)
goto out;
do {
dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len,
sizeof(*rsp), DMA_FROM_DEVICE);
} while (!rsp->is_complete);
if (resp_buf) {
dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len +
le32_to_cpu(rsp->buf_offset),
resp_len, DMA_FROM_DEVICE);
memcpy(resp_buf, qcom_scm_get_response_buffer(rsp),
resp_len);
}
out:
dma_unmap_single(dev, cmd_phys, alloc_len, DMA_TO_DEVICE);
kfree(cmd);
return ret;
}
#define SCM_CLASS_REGISTER (0x2 << 8)
#define SCM_MASK_IRQS BIT(5)
#define SCM_ATOMIC(svc, cmd, n) (((((svc) << 10)|((cmd) & 0x3ff)) << 12) | \
SCM_CLASS_REGISTER | \
SCM_MASK_IRQS | \
(n & 0xf))
/**
* qcom_scm_call_atomic1() - Send an atomic SCM command with one argument
* @svc_id: service identifier
* @cmd_id: command identifier
* @arg1: first argument
*
* This shall only be used with commands that are guaranteed to be
* uninterruptible, atomic and SMP safe.
*/
static s32 qcom_scm_call_atomic1(u32 svc, u32 cmd, u32 arg1)
{
int context_id;
register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 1);
register u32 r1 asm("r1") = (u32)&context_id;
register u32 r2 asm("r2") = arg1;
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r0")
__asmeq("%2", "r1")
__asmeq("%3", "r2")
#ifdef REQUIRES_SEC
".arch_extension sec\n"
#endif
"smc #0 @ switch to secure world\n"
: "=r" (r0)
: "r" (r0), "r" (r1), "r" (r2)
: "r3", "r12");
return r0;
}
/**
* qcom_scm_call_atomic2() - Send an atomic SCM command with two arguments
* @svc_id: service identifier
* @cmd_id: command identifier
* @arg1: first argument
* @arg2: second argument
*
* This shall only be used with commands that are guaranteed to be
* uninterruptible, atomic and SMP safe.
*/
static s32 qcom_scm_call_atomic2(u32 svc, u32 cmd, u32 arg1, u32 arg2)
{
int context_id;
register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 2);
register u32 r1 asm("r1") = (u32)&context_id;
register u32 r2 asm("r2") = arg1;
register u32 r3 asm("r3") = arg2;
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r0")
__asmeq("%2", "r1")
__asmeq("%3", "r2")
__asmeq("%4", "r3")
#ifdef REQUIRES_SEC
".arch_extension sec\n"
#endif
"smc #0 @ switch to secure world\n"
: "=r" (r0)
: "r" (r0), "r" (r1), "r" (r2), "r" (r3)
: "r12");
return r0;
}
u32 qcom_scm_get_version(void)
{
int context_id;
static u32 version = -1;
register u32 r0 asm("r0");
register u32 r1 asm("r1");
if (version != -1)
return version;
mutex_lock(&qcom_scm_lock);
r0 = 0x1 << 8;
r1 = (u32)&context_id;
do {
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r0")
__asmeq("%3", "r1")
#ifdef REQUIRES_SEC
".arch_extension sec\n"
#endif
"smc #0 @ switch to secure world\n"
: "=r" (r0), "=r" (r1)
: "r" (r0), "r" (r1)
: "r2", "r3", "r12");
} while (r0 == QCOM_SCM_INTERRUPTED);
version = r1;
mutex_unlock(&qcom_scm_lock);
return version;
}
EXPORT_SYMBOL(qcom_scm_get_version);
/**
* qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the cold boot address of the cpus. Any cpu outside the supported
* range would be removed from the cpu present mask.
*/
int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
{
int flags = 0;
int cpu;
int scm_cb_flags[] = {
QCOM_SCM_FLAG_COLDBOOT_CPU0,
QCOM_SCM_FLAG_COLDBOOT_CPU1,
QCOM_SCM_FLAG_COLDBOOT_CPU2,
QCOM_SCM_FLAG_COLDBOOT_CPU3,
};
if (!cpus || (cpus && cpumask_empty(cpus)))
return -EINVAL;
for_each_cpu(cpu, cpus) {
if (cpu < ARRAY_SIZE(scm_cb_flags))
flags |= scm_cb_flags[cpu];
else
set_cpu_present(cpu, false);
}
return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR,
flags, virt_to_phys(entry));
}
/**
* qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the Linux entry point for the SCM to transfer control to when coming
* out of a power down. CPU power down may be executed on cpuidle or hotplug.
*/
int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry,
const cpumask_t *cpus)
{
int ret;
int flags = 0;
int cpu;
struct {
__le32 flags;
__le32 addr;
} cmd;
/*
* Reassign only if we are switching from hotplug entry point
* to cpuidle entry point or vice versa.
*/
for_each_cpu(cpu, cpus) {
if (entry == qcom_scm_wb[cpu].entry)
continue;
flags |= qcom_scm_wb[cpu].flag;
}
/* No change in entry function */
if (!flags)
return 0;
cmd.addr = cpu_to_le32(virt_to_phys(entry));
cmd.flags = cpu_to_le32(flags);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_BOOT_ADDR,
&cmd, sizeof(cmd), NULL, 0);
if (!ret) {
for_each_cpu(cpu, cpus)
qcom_scm_wb[cpu].entry = entry;
}
return ret;
}
/**
* qcom_scm_cpu_power_down() - Power down the cpu
* @flags: Flags to flush cache
*
* This is an end point to power down the cpu. If there was a pending interrupt,
* control returns from this function; otherwise the cpu jumps to the
* warm boot entry point set for this cpu upon reset.
*/
void __qcom_scm_cpu_power_down(u32 flags)
{
qcom_scm_call_atomic1(QCOM_SCM_SVC_BOOT, QCOM_SCM_CMD_TERMINATE_PC,
flags & QCOM_SCM_FLUSH_FLAG_MASK);
}
int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, u32 cmd_id)
{
int ret;
__le32 svc_cmd = cpu_to_le32((svc_id << 10) | cmd_id);
__le32 ret_val = 0;
ret = qcom_scm_call(dev, QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD,
&svc_cmd, sizeof(svc_cmd), &ret_val,
sizeof(ret_val));
if (ret)
return ret;
return le32_to_cpu(ret_val);
}
int __qcom_scm_hdcp_req(struct device *dev, struct qcom_scm_hdcp_req *req,
u32 req_cnt, u32 *resp)
{
if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT)
return -ERANGE;
return qcom_scm_call(dev, QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP,
req, req_cnt * sizeof(*req), resp, sizeof(*resp));
}
int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset, u32 size,
u32 mode)
{
struct ocmem_tz_lock {
__le32 id;
__le32 offset;
__le32 size;
__le32 mode;
} request;
request.id = cpu_to_le32(id);
request.offset = cpu_to_le32(offset);
request.size = cpu_to_le32(size);
request.mode = cpu_to_le32(mode);
return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_LOCK_CMD,
&request, sizeof(request), NULL, 0);
}
int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset, u32 size)
{
struct ocmem_tz_unlock {
__le32 id;
__le32 offset;
__le32 size;
} request;
request.id = cpu_to_le32(id);
request.offset = cpu_to_le32(offset);
request.size = cpu_to_le32(size);
return qcom_scm_call(dev, QCOM_SCM_OCMEM_SVC, QCOM_SCM_OCMEM_UNLOCK_CMD,
&request, sizeof(request), NULL, 0);
}
void __qcom_scm_init(void)
{
}
bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral)
{
__le32 out;
__le32 in;
int ret;
in = cpu_to_le32(peripheral);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_IS_SUPPORTED_CMD,
&in, sizeof(in),
&out, sizeof(out));
return ret ? false : !!out;
}
int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral,
dma_addr_t metadata_phys)
{
__le32 scm_ret;
int ret;
struct {
__le32 proc;
__le32 image_addr;
} request;
request.proc = cpu_to_le32(peripheral);
request.image_addr = cpu_to_le32(metadata_phys);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_INIT_IMAGE_CMD,
&request, sizeof(request),
&scm_ret, sizeof(scm_ret));
return ret ? : le32_to_cpu(scm_ret);
}
int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral,
phys_addr_t addr, phys_addr_t size)
{
__le32 scm_ret;
int ret;
struct {
__le32 proc;
__le32 addr;
__le32 len;
} request;
request.proc = cpu_to_le32(peripheral);
request.addr = cpu_to_le32(addr);
request.len = cpu_to_le32(size);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_MEM_SETUP_CMD,
&request, sizeof(request),
&scm_ret, sizeof(scm_ret));
return ret ? : le32_to_cpu(scm_ret);
}
int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral)
{
__le32 out;
__le32 in;
int ret;
in = cpu_to_le32(peripheral);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_AUTH_AND_RESET_CMD,
&in, sizeof(in),
&out, sizeof(out));
return ret ? : le32_to_cpu(out);
}
int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral)
{
__le32 out;
__le32 in;
int ret;
in = cpu_to_le32(peripheral);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_SHUTDOWN_CMD,
&in, sizeof(in),
&out, sizeof(out));
return ret ? : le32_to_cpu(out);
}
int __qcom_scm_pas_mss_reset(struct device *dev, bool reset)
{
__le32 out;
__le32 in = cpu_to_le32(reset);
int ret;
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MSS_RESET,
&in, sizeof(in),
&out, sizeof(out));
return ret ? : le32_to_cpu(out);
}
int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
{
return qcom_scm_call_atomic2(QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
enable ? QCOM_SCM_SET_DLOAD_MODE : 0, 0);
}
int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id)
{
struct {
__le32 state;
__le32 id;
} req;
__le32 scm_ret = 0;
int ret;
req.state = cpu_to_le32(state);
req.id = cpu_to_le32(id);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_REMOTE_STATE,
&req, sizeof(req), &scm_ret, sizeof(scm_ret));
return ret ? : le32_to_cpu(scm_ret);
}
int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region,
size_t mem_sz, phys_addr_t src, size_t src_sz,
phys_addr_t dest, size_t dest_sz)
{
return -ENODEV;
}
int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id,
u32 spare)
{
struct msm_scm_sec_cfg {
__le32 id;
__le32 ctx_bank_num;
} cfg;
int ret, scm_ret = 0;
cfg.id = cpu_to_le32(device_id);
cfg.ctx_bank_num = cpu_to_le32(spare);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, QCOM_SCM_RESTORE_SEC_CFG,
&cfg, sizeof(cfg), &scm_ret, sizeof(scm_ret));
if (ret || scm_ret)
return ret ? ret : -EINVAL;
return 0;
}
int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare,
size_t *size)
{
return -ENODEV;
}
int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size,
u32 spare)
{
return -ENODEV;
}
int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
unsigned int *val)
{
int ret;
ret = qcom_scm_call_atomic1(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ, addr);
if (ret >= 0)
*val = ret;
return ret < 0 ? ret : 0;
}
int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
{
return qcom_scm_call_atomic2(QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
addr, val);
}
int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev, bool enable)
{
return -ENODEV;
}


@@ -1,579 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
*/
#include <linux/io.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/qcom_scm.h>
#include <linux/arm-smccc.h>
#include <linux/dma-mapping.h>
#include "qcom_scm.h"
#define QCOM_SCM_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF))
#define MAX_QCOM_SCM_ARGS 10
#define MAX_QCOM_SCM_RETS 3
enum qcom_scm_arg_types {
QCOM_SCM_VAL,
QCOM_SCM_RO,
QCOM_SCM_RW,
QCOM_SCM_BUFVAL,
};
#define QCOM_SCM_ARGS_IMPL(num, a, b, c, d, e, f, g, h, i, j, ...) (\
(((a) & 0x3) << 4) | \
(((b) & 0x3) << 6) | \
(((c) & 0x3) << 8) | \
(((d) & 0x3) << 10) | \
(((e) & 0x3) << 12) | \
(((f) & 0x3) << 14) | \
(((g) & 0x3) << 16) | \
(((h) & 0x3) << 18) | \
(((i) & 0x3) << 20) | \
(((j) & 0x3) << 22) | \
((num) & 0xf))
#define QCOM_SCM_ARGS(...) QCOM_SCM_ARGS_IMPL(__VA_ARGS__, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
/**
* struct qcom_scm_desc
* @arginfo: Metadata describing the arguments in args[]
* @args: The array of arguments for the secure syscall
* @res: The values returned by the secure syscall
*/
struct qcom_scm_desc {
u32 arginfo;
u64 args[MAX_QCOM_SCM_ARGS];
};
static u64 qcom_smccc_convention = -1;
static DEFINE_MUTEX(qcom_scm_lock);
#define QCOM_SCM_EBUSY_WAIT_MS 30
#define QCOM_SCM_EBUSY_MAX_RETRY 20
#define N_EXT_QCOM_SCM_ARGS 7
#define FIRST_EXT_ARG_IDX 3
#define N_REGISTER_ARGS (MAX_QCOM_SCM_ARGS - N_EXT_QCOM_SCM_ARGS + 1)
static void __qcom_scm_call_do(const struct qcom_scm_desc *desc,
struct arm_smccc_res *res, u32 fn_id,
u64 x5, u32 type)
{
u64 cmd;
struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 };
cmd = ARM_SMCCC_CALL_VAL(type, qcom_smccc_convention,
ARM_SMCCC_OWNER_SIP, fn_id);
quirk.state.a6 = 0;
do {
arm_smccc_smc_quirk(cmd, desc->arginfo, desc->args[0],
desc->args[1], desc->args[2], x5,
quirk.state.a6, 0, res, &quirk);
if (res->a0 == QCOM_SCM_INTERRUPTED)
cmd = res->a0;
} while (res->a0 == QCOM_SCM_INTERRUPTED);
}
static void qcom_scm_call_do(const struct qcom_scm_desc *desc,
struct arm_smccc_res *res, u32 fn_id,
u64 x5, bool atomic)
{
int retry_count = 0;
if (atomic) {
__qcom_scm_call_do(desc, res, fn_id, x5, ARM_SMCCC_FAST_CALL);
return;
}
do {
mutex_lock(&qcom_scm_lock);
__qcom_scm_call_do(desc, res, fn_id, x5,
ARM_SMCCC_STD_CALL);
mutex_unlock(&qcom_scm_lock);
if (res->a0 == QCOM_SCM_V2_EBUSY) {
if (retry_count++ > QCOM_SCM_EBUSY_MAX_RETRY)
break;
msleep(QCOM_SCM_EBUSY_WAIT_MS);
}
} while (res->a0 == QCOM_SCM_V2_EBUSY);
}
static int ___qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
const struct qcom_scm_desc *desc,
struct arm_smccc_res *res, bool atomic)
{
int arglen = desc->arginfo & 0xf;
int i;
u32 fn_id = QCOM_SCM_FNID(svc_id, cmd_id);
u64 x5 = desc->args[FIRST_EXT_ARG_IDX];
dma_addr_t args_phys = 0;
void *args_virt = NULL;
size_t alloc_len;
gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
if (unlikely(arglen > N_REGISTER_ARGS)) {
alloc_len = N_EXT_QCOM_SCM_ARGS * sizeof(u64);
args_virt = kzalloc(PAGE_ALIGN(alloc_len), flag);
if (!args_virt)
return -ENOMEM;
if (qcom_smccc_convention == ARM_SMCCC_SMC_32) {
__le32 *args = args_virt;
for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++)
args[i] = cpu_to_le32(desc->args[i +
FIRST_EXT_ARG_IDX]);
} else {
__le64 *args = args_virt;
for (i = 0; i < N_EXT_QCOM_SCM_ARGS; i++)
args[i] = cpu_to_le64(desc->args[i +
FIRST_EXT_ARG_IDX]);
}
args_phys = dma_map_single(dev, args_virt, alloc_len,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, args_phys)) {
kfree(args_virt);
return -ENOMEM;
}
x5 = args_phys;
}
qcom_scm_call_do(desc, res, fn_id, x5, atomic);
if (args_virt) {
dma_unmap_single(dev, args_phys, alloc_len, DMA_TO_DEVICE);
kfree(args_virt);
}
if ((long)res->a0 < 0)
return qcom_scm_remap_error(res->a0);
return 0;
}
/**
* qcom_scm_call() - Invoke a syscall in the secure world
* @dev: device
* @svc_id: service identifier
* @cmd_id: command identifier
* @desc: Descriptor structure containing arguments and return values
*
* Sends a command to the SCM and waits for the command to finish processing.
* This should *only* be called in pre-emptible context.
*/
static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
const struct qcom_scm_desc *desc,
struct arm_smccc_res *res)
{
might_sleep();
return ___qcom_scm_call(dev, svc_id, cmd_id, desc, res, false);
}
/**
* qcom_scm_call_atomic() - atomic variation of qcom_scm_call()
* @dev: device
* @svc_id: service identifier
* @cmd_id: command identifier
* @desc: Descriptor structure containing arguments and return values
* @res: Structure containing results from SMC/HVC call
*
* Sends a command to the SCM and waits for the command to finish processing.
* This can be called in atomic context.
*/
static int qcom_scm_call_atomic(struct device *dev, u32 svc_id, u32 cmd_id,
const struct qcom_scm_desc *desc,
struct arm_smccc_res *res)
{
return ___qcom_scm_call(dev, svc_id, cmd_id, desc, res, true);
}
/**
* qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the cold boot address of the cpus. Any cpu outside the supported
* range would be removed from the cpu present mask.
*/
int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus)
{
return -ENOTSUPP;
}
/**
* qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus
* @dev: Device pointer
* @entry: Entry point function for the cpus
* @cpus: The cpumask of cpus that will use the entry point
*
* Set the Linux entry point for the SCM to transfer control to when coming
* out of a power down. CPU power down may be executed on cpuidle or hotplug.
*/
int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry,
const cpumask_t *cpus)
{
return -ENOTSUPP;
}
/**
* qcom_scm_cpu_power_down() - Power down the cpu
* @flags: Flags to flush cache
*
* This is an end point to power down the cpu. If there was a pending interrupt,
* control returns from this function; otherwise the cpu jumps to the
* warm boot entry point set for this cpu upon reset.
*/
void __qcom_scm_cpu_power_down(u32 flags)
{
}
int __qcom_scm_is_call_available(struct device *dev, u32 svc_id, u32 cmd_id)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.arginfo = QCOM_SCM_ARGS(1);
desc.args[0] = QCOM_SCM_FNID(svc_id, cmd_id) |
(ARM_SMCCC_OWNER_SIP << ARM_SMCCC_OWNER_SHIFT);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_hdcp_req(struct device *dev, struct qcom_scm_hdcp_req *req,
u32 req_cnt, u32 *resp)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
if (req_cnt > QCOM_SCM_HDCP_MAX_REQ_CNT)
return -ERANGE;
desc.args[0] = req[0].addr;
desc.args[1] = req[0].val;
desc.args[2] = req[1].addr;
desc.args[3] = req[1].val;
desc.args[4] = req[2].addr;
desc.args[5] = req[2].val;
desc.args[6] = req[3].addr;
desc.args[7] = req[3].val;
desc.args[8] = req[4].addr;
desc.args[9] = req[4].val;
desc.arginfo = QCOM_SCM_ARGS(10);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_HDCP, QCOM_SCM_CMD_HDCP, &desc,
&res);
*resp = res.a1;
return ret;
}
int __qcom_scm_ocmem_lock(struct device *dev, uint32_t id, uint32_t offset,
uint32_t size, uint32_t mode)
{
return -ENOTSUPP;
}
int __qcom_scm_ocmem_unlock(struct device *dev, uint32_t id, uint32_t offset,
uint32_t size)
{
return -ENOTSUPP;
}
void __qcom_scm_init(void)
{
u64 cmd;
struct arm_smccc_res res;
u32 function = QCOM_SCM_FNID(QCOM_SCM_SVC_INFO, QCOM_IS_CALL_AVAIL_CMD);
/* First try an SMC64 call */
cmd = ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, ARM_SMCCC_SMC_64,
ARM_SMCCC_OWNER_SIP, function);
arm_smccc_smc(cmd, QCOM_SCM_ARGS(1), cmd & (~BIT(ARM_SMCCC_TYPE_SHIFT)),
0, 0, 0, 0, 0, &res);
if (!res.a0 && res.a1)
qcom_smccc_convention = ARM_SMCCC_SMC_64;
else
qcom_smccc_convention = ARM_SMCCC_SMC_32;
}
bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = peripheral;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_IS_SUPPORTED_CMD,
&desc, &res);
return ret ? false : !!res.a1;
}
int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral,
dma_addr_t metadata_phys)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = peripheral;
desc.args[1] = metadata_phys;
desc.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_INIT_IMAGE_CMD,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral,
phys_addr_t addr, phys_addr_t size)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = peripheral;
desc.args[1] = addr;
desc.args[2] = size;
desc.arginfo = QCOM_SCM_ARGS(3);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MEM_SETUP_CMD,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = peripheral;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL,
QCOM_SCM_PAS_AUTH_AND_RESET_CMD,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = peripheral;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_SHUTDOWN_CMD,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_pas_mss_reset(struct device *dev, bool reset)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = reset;
desc.args[1] = 0;
desc.arginfo = QCOM_SCM_ARGS(2);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_PIL, QCOM_SCM_PAS_MSS_RESET, &desc,
&res);
return ret ? : res.a1;
}
int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = state;
desc.args[1] = id;
desc.arginfo = QCOM_SCM_ARGS(2);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_REMOTE_STATE,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_assign_mem(struct device *dev, phys_addr_t mem_region,
size_t mem_sz, phys_addr_t src, size_t src_sz,
phys_addr_t dest, size_t dest_sz)
{
int ret;
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = mem_region;
desc.args[1] = mem_sz;
desc.args[2] = src;
desc.args[3] = src_sz;
desc.args[4] = dest;
desc.args[5] = dest_sz;
desc.args[6] = 0;
desc.arginfo = QCOM_SCM_ARGS(7, QCOM_SCM_RO, QCOM_SCM_VAL,
QCOM_SCM_RO, QCOM_SCM_VAL, QCOM_SCM_RO,
QCOM_SCM_VAL, QCOM_SCM_VAL);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP,
QCOM_MEM_PROT_ASSIGN_ID,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id, u32 spare)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = device_id;
desc.args[1] = spare;
desc.arginfo = QCOM_SCM_ARGS(2);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP, QCOM_SCM_RESTORE_SEC_CFG,
&desc, &res);
return ret ? : res.a1;
}
int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare,
size_t *size)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = spare;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP,
QCOM_SCM_IOMMU_SECURE_PTBL_SIZE, &desc, &res);
if (size)
*size = res.a1;
return ret ? : res.a2;
}
int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr, u32 size,
u32 spare)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = addr;
desc.args[1] = size;
desc.args[2] = spare;
desc.arginfo = QCOM_SCM_ARGS(3, QCOM_SCM_RW, QCOM_SCM_VAL,
QCOM_SCM_VAL);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_MP,
QCOM_SCM_IOMMU_SECURE_PTBL_INIT, &desc, &res);
/* The page table has already been initialized, ignore the error */
if (ret == -EPERM)
ret = 0;
return ret;
}
int __qcom_scm_set_dload_mode(struct device *dev, bool enable)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = QCOM_SCM_SET_DLOAD_MODE;
desc.args[1] = enable ? QCOM_SCM_SET_DLOAD_MODE : 0;
desc.arginfo = QCOM_SCM_ARGS(2);
return qcom_scm_call(dev, QCOM_SCM_SVC_BOOT, QCOM_SCM_SET_DLOAD_MODE,
&desc, &res);
}
int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr,
unsigned int *val)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
int ret;
desc.args[0] = addr;
desc.arginfo = QCOM_SCM_ARGS(1);
ret = qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ,
&desc, &res);
if (ret >= 0)
*val = res.a1;
return ret < 0 ? ret : 0;
}
int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = addr;
desc.args[1] = val;
desc.arginfo = QCOM_SCM_ARGS(2);
return qcom_scm_call(dev, QCOM_SCM_SVC_IO, QCOM_SCM_IO_WRITE,
&desc, &res);
}
int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev, bool en)
{
struct qcom_scm_desc desc = {0};
struct arm_smccc_res res;
desc.args[0] = QCOM_SCM_CONFIG_ERRATA1_CLIENT_ALL;
desc.args[1] = en;
desc.arginfo = QCOM_SCM_ARGS(2);
return qcom_scm_call_atomic(dev, QCOM_SCM_SVC_SMMU_PROGRAM,
QCOM_SCM_CONFIG_ERRATA1, &desc, &res);
}


@@ -0,0 +1,242 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2010,2015,2019 The Linux Foundation. All rights reserved.
* Copyright (C) 2015 Linaro Ltd.
*/
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/qcom_scm.h>
#include <linux/arm-smccc.h>
#include <linux/dma-mapping.h>
#include "qcom_scm.h"
static DEFINE_MUTEX(qcom_scm_lock);
/**
* struct arm_smccc_args
* @args: The array of values used in registers in smc instruction
*/
struct arm_smccc_args {
unsigned long args[8];
};
/**
* struct scm_legacy_command - one SCM command buffer
* @len: total available memory for command and response
* @buf_offset: start of command buffer
* @resp_hdr_offset: start of response buffer
* @id: command to be executed
* @buf: buffer returned from scm_legacy_get_command_buffer()
*
* An SCM command is laid out in memory as follows:
*
* ------------------- <--- struct scm_legacy_command
* | command header |
* ------------------- <--- scm_legacy_get_command_buffer()
* | command buffer |
* ------------------- <--- struct scm_legacy_response and
* | response header | scm_legacy_command_to_response()
* ------------------- <--- scm_legacy_get_response_buffer()
* | response buffer |
* -------------------
*
* There can be arbitrary padding between the headers and buffers so
* you should always use the appropriate scm_legacy_get_*_buffer() routines
* to access the buffers in a safe manner.
*/
struct scm_legacy_command {
__le32 len;
__le32 buf_offset;
__le32 resp_hdr_offset;
__le32 id;
__le32 buf[0];
};
/**
* struct scm_legacy_response - one SCM response buffer
* @len: total available memory for response
* @buf_offset: start of response data relative to start of scm_legacy_response
* @is_complete: indicates if the command has finished processing
*/
struct scm_legacy_response {
__le32 len;
__le32 buf_offset;
__le32 is_complete;
};
/**
* scm_legacy_command_to_response() - Get a pointer to a scm_legacy_response
* @cmd: command
*
* Returns a pointer to a response for a command.
*/
static inline struct scm_legacy_response *scm_legacy_command_to_response(
const struct scm_legacy_command *cmd)
{
return (void *)cmd + le32_to_cpu(cmd->resp_hdr_offset);
}
/**
* scm_legacy_get_command_buffer() - Get a pointer to a command buffer
* @cmd: command
*
* Returns a pointer to the command buffer of a command.
*/
static inline void *scm_legacy_get_command_buffer(
const struct scm_legacy_command *cmd)
{
return (void *)cmd->buf;
}
/**
* scm_legacy_get_response_buffer() - Get a pointer to a response buffer
* @rsp: response
*
* Returns a pointer to a response buffer of a response.
*/
static inline void *scm_legacy_get_response_buffer(
const struct scm_legacy_response *rsp)
{
return (void *)rsp + le32_to_cpu(rsp->buf_offset);
}
static void __scm_legacy_do(const struct arm_smccc_args *smc,
struct arm_smccc_res *res)
{
do {
arm_smccc_smc(smc->args[0], smc->args[1], smc->args[2],
smc->args[3], smc->args[4], smc->args[5],
smc->args[6], smc->args[7], res);
} while (res->a0 == QCOM_SCM_INTERRUPTED);
}
/**
* scm_legacy_call() - Send a command to the SCM and wait for it to finish
* processing.
*
* A note on cache maintenance:
* Note that any buffers that are expected to be accessed by the secure world
* must be flushed before invoking qcom_scm_call and invalidated in the cache
* immediately after qcom_scm_call returns. Cache maintenance on the command
* and response buffers is taken care of by qcom_scm_call; however, callers are
* responsible for any other cached buffers passed over to the secure world.
*/
int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res)
{
u8 arglen = desc->arginfo & 0xf;
int ret = 0, context_id;
unsigned int i;
struct scm_legacy_command *cmd;
struct scm_legacy_response *rsp;
struct arm_smccc_args smc = {0};
struct arm_smccc_res smc_res;
const size_t cmd_len = arglen * sizeof(__le32);
const size_t resp_len = MAX_QCOM_SCM_RETS * sizeof(__le32);
size_t alloc_len = sizeof(*cmd) + cmd_len + sizeof(*rsp) + resp_len;
dma_addr_t cmd_phys;
__le32 *arg_buf;
const __le32 *res_buf;
cmd = kzalloc(PAGE_ALIGN(alloc_len), GFP_KERNEL);
if (!cmd)
return -ENOMEM;
cmd->len = cpu_to_le32(alloc_len);
cmd->buf_offset = cpu_to_le32(sizeof(*cmd));
cmd->resp_hdr_offset = cpu_to_le32(sizeof(*cmd) + cmd_len);
cmd->id = cpu_to_le32(SCM_LEGACY_FNID(desc->svc, desc->cmd));
arg_buf = scm_legacy_get_command_buffer(cmd);
for (i = 0; i < arglen; i++)
arg_buf[i] = cpu_to_le32(desc->args[i]);
rsp = scm_legacy_command_to_response(cmd);
cmd_phys = dma_map_single(dev, cmd, alloc_len, DMA_TO_DEVICE);
if (dma_mapping_error(dev, cmd_phys)) {
kfree(cmd);
return -ENOMEM;
}
smc.args[0] = 1;
smc.args[1] = (unsigned long)&context_id;
smc.args[2] = cmd_phys;
mutex_lock(&qcom_scm_lock);
__scm_legacy_do(&smc, &smc_res);
if (smc_res.a0)
ret = qcom_scm_remap_error(smc_res.a0);
mutex_unlock(&qcom_scm_lock);
if (ret)
goto out;
do {
dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len,
sizeof(*rsp), DMA_FROM_DEVICE);
} while (!rsp->is_complete);
dma_sync_single_for_cpu(dev, cmd_phys + sizeof(*cmd) + cmd_len +
le32_to_cpu(rsp->buf_offset),
resp_len, DMA_FROM_DEVICE);
if (res) {
res_buf = scm_legacy_get_response_buffer(rsp);
for (i = 0; i < MAX_QCOM_SCM_RETS; i++)
res->result[i] = le32_to_cpu(res_buf[i]);
}
out:
dma_unmap_single(dev, cmd_phys, alloc_len, DMA_TO_DEVICE);
kfree(cmd);
return ret;
}
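The cache-maintenance note above leaves any extra buffers to the caller; here is a minimal sketch of that duty for a caller-owned buffer. The function name, the service/command pair, and the argument layout are illustrative assumptions, not an API from this series:

static int example_pass_buffer(struct device *dev, void *buf, size_t len)
{
	struct qcom_scm_desc desc = {
		.svc = QCOM_SCM_SVC_BOOT,	/* placeholder service/command */
		.cmd = 0x1,
		.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_RW, QCOM_SCM_VAL),
	};
	dma_addr_t phys;
	int ret;

	/* Clean the buffer out of the cache before the secure world reads it */
	phys = dma_map_single(dev, buf, len, DMA_BIDIRECTIONAL);
	if (dma_mapping_error(dev, phys))
		return -ENOMEM;

	desc.args[0] = phys;
	desc.args[1] = len;

	ret = scm_legacy_call(dev, &desc, NULL);

	/* Unmapping invalidates, so the CPU sees any secure-world writes */
	dma_unmap_single(dev, phys, len, DMA_BIDIRECTIONAL);

	return ret;
}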
#define SCM_LEGACY_ATOMIC_N_REG_ARGS 5
#define SCM_LEGACY_ATOMIC_FIRST_REG_IDX 2
#define SCM_LEGACY_CLASS_REGISTER (0x2 << 8)
#define SCM_LEGACY_MASK_IRQS BIT(5)
#define SCM_LEGACY_ATOMIC_ID(svc, cmd, n) \
((SCM_LEGACY_FNID(svc, cmd) << 12) | \
SCM_LEGACY_CLASS_REGISTER | \
SCM_LEGACY_MASK_IRQS | \
(n & 0xf))
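A worked example of this packing, using the IO-read identifiers defined in the header later in this series (values computed for illustration):

/* SCM_LEGACY_FNID(QCOM_SCM_SVC_IO, QCOM_SCM_IO_READ) == (0x05 << 10) | 0x01
 *                                                    == 0x1401
 * SCM_LEGACY_ATOMIC_ID(0x05, 0x01, 1) == (0x1401 << 12) | (0x2 << 8)
 *                                        | BIT(5) | 1 == 0x01401221
 */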
/**
* scm_legacy_call_atomic() - Send an atomic SCM command with up to 5 arguments
* and 3 return values
* @desc: SCM call descriptor containing arguments
* @res: SCM call return values
*
* This shall only be used with commands that are guaranteed to be
* uninterruptible, atomic and SMP safe.
*/
int scm_legacy_call_atomic(struct device *unused,
const struct qcom_scm_desc *desc,
struct qcom_scm_res *res)
{
int context_id;
struct arm_smccc_res smc_res;
size_t arglen = desc->arginfo & 0xf;
BUG_ON(arglen > SCM_LEGACY_ATOMIC_N_REG_ARGS);
arm_smccc_smc(SCM_LEGACY_ATOMIC_ID(desc->svc, desc->cmd, arglen),
(unsigned long)&context_id,
desc->args[0], desc->args[1], desc->args[2],
desc->args[3], desc->args[4], 0, &smc_res);
if (res) {
res->result[0] = smc_res.a1;
res->result[1] = smc_res.a2;
res->result[2] = smc_res.a3;
}
return smc_res.a0;
}


@@ -0,0 +1,151 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2015,2019 The Linux Foundation. All rights reserved.
*/
#include <linux/io.h>
#include <linux/errno.h>
#include <linux/delay.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/qcom_scm.h>
#include <linux/arm-smccc.h>
#include <linux/dma-mapping.h>
#include "qcom_scm.h"
/**
* struct arm_smccc_args
* @args: The array of values used in registers in smc instruction
*/
struct arm_smccc_args {
unsigned long args[8];
};
static DEFINE_MUTEX(qcom_scm_lock);
#define QCOM_SCM_EBUSY_WAIT_MS 30
#define QCOM_SCM_EBUSY_MAX_RETRY 20
#define SCM_SMC_N_REG_ARGS 4
#define SCM_SMC_FIRST_EXT_IDX (SCM_SMC_N_REG_ARGS - 1)
#define SCM_SMC_N_EXT_ARGS (MAX_QCOM_SCM_ARGS - SCM_SMC_N_REG_ARGS + 1)
#define SCM_SMC_FIRST_REG_IDX 2
#define SCM_SMC_LAST_REG_IDX (SCM_SMC_FIRST_REG_IDX + SCM_SMC_N_REG_ARGS - 1)
static void __scm_smc_do_quirk(const struct arm_smccc_args *smc,
struct arm_smccc_res *res)
{
unsigned long a0 = smc->args[0];
struct arm_smccc_quirk quirk = { .id = ARM_SMCCC_QUIRK_QCOM_A6 };
quirk.state.a6 = 0;
do {
arm_smccc_smc_quirk(a0, smc->args[1], smc->args[2],
smc->args[3], smc->args[4], smc->args[5],
quirk.state.a6, smc->args[7], res, &quirk);
if (res->a0 == QCOM_SCM_INTERRUPTED)
a0 = res->a0;
} while (res->a0 == QCOM_SCM_INTERRUPTED);
}
static void __scm_smc_do(const struct arm_smccc_args *smc,
struct arm_smccc_res *res, bool atomic)
{
int retry_count = 0;
if (atomic) {
__scm_smc_do_quirk(smc, res);
return;
}
do {
mutex_lock(&qcom_scm_lock);
__scm_smc_do_quirk(smc, res);
mutex_unlock(&qcom_scm_lock);
if (res->a0 == QCOM_SCM_V2_EBUSY) {
if (retry_count++ > QCOM_SCM_EBUSY_MAX_RETRY)
break;
msleep(QCOM_SCM_EBUSY_WAIT_MS);
}
} while (res->a0 == QCOM_SCM_V2_EBUSY);
}
int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res, bool atomic)
{
int arglen = desc->arginfo & 0xf;
int i;
dma_addr_t args_phys = 0;
void *args_virt = NULL;
size_t alloc_len;
gfp_t flag = atomic ? GFP_ATOMIC : GFP_KERNEL;
u32 smccc_call_type = atomic ? ARM_SMCCC_FAST_CALL : ARM_SMCCC_STD_CALL;
u32 qcom_smccc_convention =
(qcom_scm_convention == SMC_CONVENTION_ARM_32) ?
ARM_SMCCC_SMC_32 : ARM_SMCCC_SMC_64;
struct arm_smccc_res smc_res;
struct arm_smccc_args smc = {0};
smc.args[0] = ARM_SMCCC_CALL_VAL(
smccc_call_type,
qcom_smccc_convention,
desc->owner,
SCM_SMC_FNID(desc->svc, desc->cmd));
smc.args[1] = desc->arginfo;
for (i = 0; i < SCM_SMC_N_REG_ARGS; i++)
smc.args[i + SCM_SMC_FIRST_REG_IDX] = desc->args[i];
if (unlikely(arglen > SCM_SMC_N_REG_ARGS)) {
alloc_len = SCM_SMC_N_EXT_ARGS * sizeof(u64);
args_virt = kzalloc(PAGE_ALIGN(alloc_len), flag);
if (!args_virt)
return -ENOMEM;
if (qcom_smccc_convention == ARM_SMCCC_SMC_32) {
__le32 *args = args_virt;
for (i = 0; i < SCM_SMC_N_EXT_ARGS; i++)
args[i] = cpu_to_le32(desc->args[i +
SCM_SMC_FIRST_EXT_IDX]);
} else {
__le64 *args = args_virt;
for (i = 0; i < SCM_SMC_N_EXT_ARGS; i++)
args[i] = cpu_to_le64(desc->args[i +
SCM_SMC_FIRST_EXT_IDX]);
}
args_phys = dma_map_single(dev, args_virt, alloc_len,
DMA_TO_DEVICE);
if (dma_mapping_error(dev, args_phys)) {
kfree(args_virt);
return -ENOMEM;
}
smc.args[SCM_SMC_LAST_REG_IDX] = args_phys;
}
__scm_smc_do(&smc, &smc_res, atomic);
if (args_virt) {
dma_unmap_single(dev, args_phys, alloc_len, DMA_TO_DEVICE);
kfree(args_virt);
}
if (res) {
res->result[0] = smc_res.a1;
res->result[1] = smc_res.a2;
res->result[2] = smc_res.a3;
}
return (long)smc_res.a0 ? qcom_scm_remap_error(smc_res.a0) : 0;
}

Diff for one file not shown because of its size.


@@ -1,72 +1,117 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright (c) 2010-2015, The Linux Foundation. All rights reserved.
/* Copyright (c) 2010-2015,2019 The Linux Foundation. All rights reserved.
*/
#ifndef __QCOM_SCM_INT_H
#define __QCOM_SCM_INT_H
#define QCOM_SCM_SVC_BOOT 0x1
#define QCOM_SCM_BOOT_ADDR 0x1
#define QCOM_SCM_SET_DLOAD_MODE 0x10
#define QCOM_SCM_BOOT_ADDR_MC 0x11
#define QCOM_SCM_SET_REMOTE_STATE 0xa
extern int __qcom_scm_set_remote_state(struct device *dev, u32 state, u32 id);
extern int __qcom_scm_set_dload_mode(struct device *dev, bool enable);
enum qcom_scm_convention {
SMC_CONVENTION_UNKNOWN,
SMC_CONVENTION_LEGACY,
SMC_CONVENTION_ARM_32,
SMC_CONVENTION_ARM_64,
};
#define QCOM_SCM_FLAG_HLOS 0x01
#define QCOM_SCM_FLAG_COLDBOOT_MC 0x02
#define QCOM_SCM_FLAG_WARMBOOT_MC 0x04
extern int __qcom_scm_set_warm_boot_addr(struct device *dev, void *entry,
const cpumask_t *cpus);
extern int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus);
extern enum qcom_scm_convention qcom_scm_convention;
#define QCOM_SCM_CMD_TERMINATE_PC 0x2
#define MAX_QCOM_SCM_ARGS 10
#define MAX_QCOM_SCM_RETS 3
enum qcom_scm_arg_types {
QCOM_SCM_VAL,
QCOM_SCM_RO,
QCOM_SCM_RW,
QCOM_SCM_BUFVAL,
};
#define QCOM_SCM_ARGS_IMPL(num, a, b, c, d, e, f, g, h, i, j, ...) (\
(((a) & 0x3) << 4) | \
(((b) & 0x3) << 6) | \
(((c) & 0x3) << 8) | \
(((d) & 0x3) << 10) | \
(((e) & 0x3) << 12) | \
(((f) & 0x3) << 14) | \
(((g) & 0x3) << 16) | \
(((h) & 0x3) << 18) | \
(((i) & 0x3) << 20) | \
(((j) & 0x3) << 22) | \
((num) & 0xf))
#define QCOM_SCM_ARGS(...) QCOM_SCM_ARGS_IMPL(__VA_ARGS__, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
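A worked example of this encoding (values computed for illustration):

/* QCOM_SCM_ARGS(2, QCOM_SCM_VAL, QCOM_SCM_RW) fills num = 2,
 * a = QCOM_SCM_VAL (0), b = QCOM_SCM_RW (2), everything else 0:
 *   ((0 & 0x3) << 4) | ((2 & 0x3) << 6) | (2 & 0xf) == 0x82
 * The low nibble carries the argument count; bits 6-7 mark the
 * second argument as a read-write buffer.
 */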
/**
* struct qcom_scm_desc
* @arginfo: Metadata describing the arguments in args[]
* @args: The array of arguments for the secure syscall
*/
struct qcom_scm_desc {
u32 svc;
u32 cmd;
u32 arginfo;
u64 args[MAX_QCOM_SCM_ARGS];
u32 owner;
};
/**
* struct qcom_scm_res
* @result: The values returned by the secure syscall
*/
struct qcom_scm_res {
u64 result[MAX_QCOM_SCM_RETS];
};
#define SCM_SMC_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF))
extern int scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res, bool atomic);
#define SCM_LEGACY_FNID(s, c) (((s) << 10) | ((c) & 0x3ff))
extern int scm_legacy_call_atomic(struct device *dev,
const struct qcom_scm_desc *desc,
struct qcom_scm_res *res);
extern int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc,
struct qcom_scm_res *res);
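The frontend that routes calls between these two backends lives in the file whose diff is omitted above because of its size; the sketch below is only an illustration of such a dispatch (the function name is hypothetical), not the actual contents of that file:

static int example_scm_call(struct device *dev,
			    const struct qcom_scm_desc *desc,
			    struct qcom_scm_res *res, bool atomic)
{
	switch (qcom_scm_convention) {
	case SMC_CONVENTION_ARM_32:
	case SMC_CONVENTION_ARM_64:
		return scm_smc_call(dev, desc, res, atomic);
	case SMC_CONVENTION_LEGACY:
		return atomic ? scm_legacy_call_atomic(dev, desc, res)
			      : scm_legacy_call(dev, desc, res);
	default:
		return -EINVAL;
	}
}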
#define QCOM_SCM_SVC_BOOT 0x01
#define QCOM_SCM_BOOT_SET_ADDR 0x01
#define QCOM_SCM_BOOT_TERMINATE_PC 0x02
#define QCOM_SCM_BOOT_SET_DLOAD_MODE 0x10
#define QCOM_SCM_BOOT_SET_REMOTE_STATE 0x0a
#define QCOM_SCM_FLUSH_FLAG_MASK 0x3
#define QCOM_SCM_CMD_CORE_HOTPLUGGED 0x10
extern void __qcom_scm_cpu_power_down(u32 flags);
#define QCOM_SCM_SVC_IO 0x5
#define QCOM_SCM_IO_READ 0x1
#define QCOM_SCM_IO_WRITE 0x2
extern int __qcom_scm_io_readl(struct device *dev, phys_addr_t addr, unsigned int *val);
extern int __qcom_scm_io_writel(struct device *dev, phys_addr_t addr, unsigned int val);
#define QCOM_SCM_SVC_PIL 0x02
#define QCOM_SCM_PIL_PAS_INIT_IMAGE 0x01
#define QCOM_SCM_PIL_PAS_MEM_SETUP 0x02
#define QCOM_SCM_PIL_PAS_AUTH_AND_RESET 0x05
#define QCOM_SCM_PIL_PAS_SHUTDOWN 0x06
#define QCOM_SCM_PIL_PAS_IS_SUPPORTED 0x07
#define QCOM_SCM_PIL_PAS_MSS_RESET 0x0a
#define QCOM_SCM_SVC_INFO 0x6
#define QCOM_IS_CALL_AVAIL_CMD 0x1
extern int __qcom_scm_is_call_available(struct device *dev, u32 svc_id,
u32 cmd_id);
#define QCOM_SCM_SVC_IO 0x05
#define QCOM_SCM_IO_READ 0x01
#define QCOM_SCM_IO_WRITE 0x02
#define QCOM_SCM_SVC_INFO 0x06
#define QCOM_SCM_INFO_IS_CALL_AVAIL 0x01
#define QCOM_SCM_SVC_MP 0x0c
#define QCOM_SCM_MP_RESTORE_SEC_CFG 0x02
#define QCOM_SCM_MP_IOMMU_SECURE_PTBL_SIZE 0x03
#define QCOM_SCM_MP_IOMMU_SECURE_PTBL_INIT 0x04
#define QCOM_SCM_MP_ASSIGN 0x16
#define QCOM_SCM_SVC_OCMEM 0x0f
#define QCOM_SCM_OCMEM_LOCK_CMD 0x01
#define QCOM_SCM_OCMEM_UNLOCK_CMD 0x02
#define QCOM_SCM_SVC_HDCP 0x11
#define QCOM_SCM_CMD_HDCP 0x01
extern int __qcom_scm_hdcp_req(struct device *dev,
struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp);
#define QCOM_SCM_HDCP_INVOKE 0x01
#define QCOM_SCM_SVC_SMMU_PROGRAM 0x15
#define QCOM_SCM_SMMU_CONFIG_ERRATA1 0x03
#define QCOM_SCM_SMMU_CONFIG_ERRATA1_CLIENT_ALL 0x02
extern void __qcom_scm_init(void);
#define QCOM_SCM_OCMEM_SVC 0xf
#define QCOM_SCM_OCMEM_LOCK_CMD 0x1
#define QCOM_SCM_OCMEM_UNLOCK_CMD 0x2
extern int __qcom_scm_ocmem_lock(struct device *dev, u32 id, u32 offset,
u32 size, u32 mode);
extern int __qcom_scm_ocmem_unlock(struct device *dev, u32 id, u32 offset,
u32 size);
#define QCOM_SCM_SVC_PIL 0x2
#define QCOM_SCM_PAS_INIT_IMAGE_CMD 0x1
#define QCOM_SCM_PAS_MEM_SETUP_CMD 0x2
#define QCOM_SCM_PAS_AUTH_AND_RESET_CMD 0x5
#define QCOM_SCM_PAS_SHUTDOWN_CMD 0x6
#define QCOM_SCM_PAS_IS_SUPPORTED_CMD 0x7
#define QCOM_SCM_PAS_MSS_RESET 0xa
extern bool __qcom_scm_pas_supported(struct device *dev, u32 peripheral);
extern int __qcom_scm_pas_init_image(struct device *dev, u32 peripheral,
dma_addr_t metadata_phys);
extern int __qcom_scm_pas_mem_setup(struct device *dev, u32 peripheral,
phys_addr_t addr, phys_addr_t size);
extern int __qcom_scm_pas_auth_and_reset(struct device *dev, u32 peripheral);
extern int __qcom_scm_pas_shutdown(struct device *dev, u32 peripheral);
extern int __qcom_scm_pas_mss_reset(struct device *dev, bool reset);
/* common error codes */
#define QCOM_SCM_V2_EBUSY -12
#define QCOM_SCM_ENOMEM -5
@@ -94,25 +139,4 @@ static inline int qcom_scm_remap_error(int err)
return -EINVAL;
}
#define QCOM_SCM_SVC_MP 0xc
#define QCOM_SCM_RESTORE_SEC_CFG 2
extern int __qcom_scm_restore_sec_cfg(struct device *dev, u32 device_id,
u32 spare);
#define QCOM_SCM_IOMMU_SECURE_PTBL_SIZE 3
#define QCOM_SCM_IOMMU_SECURE_PTBL_INIT 4
#define QCOM_SCM_SVC_SMMU_PROGRAM 0x15
#define QCOM_SCM_CONFIG_ERRATA1 0x3
#define QCOM_SCM_CONFIG_ERRATA1_CLIENT_ALL 0x2
extern int __qcom_scm_iommu_secure_ptbl_size(struct device *dev, u32 spare,
size_t *size);
extern int __qcom_scm_iommu_secure_ptbl_init(struct device *dev, u64 addr,
u32 size, u32 spare);
extern int __qcom_scm_qsmmu500_wait_safe_toggle(struct device *dev,
bool enable);
#define QCOM_MEM_PROT_ASSIGN_ID 0x16
extern int __qcom_scm_assign_mem(struct device *dev,
phys_addr_t mem_region, size_t mem_sz,
phys_addr_t src, size_t src_sz,
phys_addr_t dest, size_t dest_sz);
#endif


@@ -197,7 +197,7 @@ static int mox_get_board_info(struct mox_rwtm *rwtm)
rwtm->serial_number = reply->status[1];
rwtm->serial_number <<= 32;
rwtm->serial_number |= reply->status[0];
rwtm->board_version = reply->status[2];
rwtm->board_version = reply->status[2];
rwtm->ram_size = reply->status[3];
reply_to_mac_addr(rwtm->mac_address1, reply->status[4],
reply->status[5]);


@@ -26,6 +26,9 @@
static const struct zynqmp_eemi_ops *eemi_ops_tbl;
static bool feature_check_enabled;
static u32 zynqmp_pm_features[PM_API_MAX];
static const struct mfd_cell firmware_devs[] = {
{
.name = "zynqmp_power_controller",
@@ -44,6 +47,8 @@ static int zynqmp_pm_ret_code(u32 ret_status)
case XST_PM_SUCCESS:
case XST_PM_DOUBLE_REQ:
return 0;
case XST_PM_NO_FEATURE:
return -ENOTSUPP;
case XST_PM_NO_ACCESS:
return -EACCES;
case XST_PM_ABORT_SUSPEND:
@@ -128,6 +133,39 @@ static noinline int do_fw_call_hvc(u64 arg0, u64 arg1, u64 arg2,
return zynqmp_pm_ret_code((enum pm_ret_status)res.a0);
}
/**
* zynqmp_pm_feature() - Check whether a given feature is supported
* @api_id: API ID to check
*
* Return: Returns status, either success or error+reason
*/
static int zynqmp_pm_feature(u32 api_id)
{
int ret;
u32 ret_payload[PAYLOAD_ARG_CNT];
u64 smc_arg[2];
if (!feature_check_enabled)
return 0;
/* Return the cached value if the feature has already been checked */
if (zynqmp_pm_features[api_id] != PM_FEATURE_UNCHECKED)
return zynqmp_pm_features[api_id];
smc_arg[0] = PM_SIP_SVC | PM_FEATURE_CHECK;
smc_arg[1] = api_id;
ret = do_fw_call(smc_arg[0], smc_arg[1], 0, ret_payload);
if (ret) {
zynqmp_pm_features[api_id] = PM_FEATURE_INVALID;
return PM_FEATURE_INVALID;
}
zynqmp_pm_features[api_id] = ret_payload[1];
return zynqmp_pm_features[api_id];
}
/**
* zynqmp_pm_invoke_fn() - Invoke the system-level platform management layer
* caller function depending on the configuration
@@ -162,6 +200,9 @@ int zynqmp_pm_invoke_fn(u32 pm_api_id, u32 arg0, u32 arg1,
*/
u64 smc_arg[4];
if (zynqmp_pm_feature(pm_api_id) == PM_FEATURE_INVALID)
return -ENOTSUPP;
smc_arg[0] = PM_SIP_SVC | pm_api_id;
smc_arg[1] = ((u64)arg1 << 32) | arg0;
smc_arg[2] = ((u64)arg3 << 32) | arg2;
@@ -717,6 +758,8 @@ static int zynqmp_firmware_probe(struct platform_device *pdev)
np = of_find_compatible_node(NULL, NULL, "xlnx,versal");
if (!np)
return 0;
feature_check_enabled = true;
}
of_node_put(np);


@ -259,7 +259,7 @@ static int scmi_hwmon_probe(struct scmi_device *sdev)
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_SENSOR },
{ SCMI_PROTOCOL_SENSOR, "hwmon" },
{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);


@ -143,7 +143,6 @@ static const struct mbox_chan_ops a37xx_mbox_ops = {
static int armada_37xx_mbox_probe(struct platform_device *pdev)
{
struct a37xx_mbox *mbox;
struct resource *regs;
struct mbox_chan *chans;
int ret;
@ -156,9 +155,7 @@ static int armada_37xx_mbox_probe(struct platform_device *pdev)
if (!chans)
return -ENOMEM;
regs = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mbox->base = devm_ioremap_resource(&pdev->dev, regs);
mbox->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(mbox->base)) {
dev_err(&pdev->dev, "ioremap failed\n");
return PTR_ERR(mbox->base);


@ -267,7 +267,6 @@ static int mvebu_devbus_probe(struct platform_device *pdev)
struct devbus_read_params r;
struct devbus_write_params w;
struct devbus *devbus;
struct resource *res;
struct clk *clk;
unsigned long rate;
int err;
@ -277,8 +276,7 @@ static int mvebu_devbus_probe(struct platform_device *pdev)
return -ENOMEM;
devbus->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
devbus->base = devm_ioremap_resource(&pdev->dev, res);
devbus->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(devbus->base))
return PTR_ERR(devbus->base);


@ -8,7 +8,7 @@ config SAMSUNG_MC
if SAMSUNG_MC
config EXYNOS5422_DMC
tristate "EXYNOS5422 Dynamic Memory Controller driver"
tristate "Exynos5422 Dynamic Memory Controller driver"
depends on ARCH_EXYNOS || (COMPILE_TEST && HAS_IOMEM)
select DDR
depends on DEVFREQ_GOV_SIMPLE_ONDEMAND


@ -3,7 +3,7 @@
// Copyright (c) 2015 Samsung Electronics Co., Ltd.
// http://www.samsung.com/
//
// EXYNOS - SROM Controller support
// Exynos - SROM Controller support
// Author: Pankaj Dubey <pankaj.dubey@samsung.com>
#include <linux/io.h>


@ -1374,7 +1374,6 @@ static int exynos5_dmc_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct exynos5_dmc *dmc;
struct resource *res;
int irq[2];
dmc = devm_kzalloc(dev, sizeof(*dmc), GFP_KERNEL);
@ -1386,13 +1385,11 @@ static int exynos5_dmc_probe(struct platform_device *pdev)
dmc->dev = dev;
platform_set_drvdata(pdev, dmc);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dmc->base_drexi0 = devm_ioremap_resource(dev, res);
dmc->base_drexi0 = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(dmc->base_drexi0))
return PTR_ERR(dmc->base_drexi0);
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
dmc->base_drexi1 = devm_ioremap_resource(dev, res);
dmc->base_drexi1 = devm_platform_ioremap_resource(pdev, 1);
if (IS_ERR(dmc->base_drexi1))
return PTR_ERR(dmc->base_drexi1);


@ -13,4 +13,5 @@ obj-$(CONFIG_TEGRA_MC) += tegra-mc.o
obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o
obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o
obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o
obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o
obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o tegra186-emc.o
obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra186-emc.o


@ -467,12 +467,20 @@ struct tegra_emc {
void __iomem *regs;
struct clk *clk;
enum emc_dram_type dram_type;
unsigned int dram_num;
struct emc_timing last_timing;
struct emc_timing *timings;
unsigned int num_timings;
struct {
struct dentry *root;
unsigned long min_rate;
unsigned long max_rate;
} debugfs;
};
/* Timing change sequence functions */
@ -998,38 +1006,51 @@ tegra_emc_find_node_by_ram_code(struct device_node *node, u32 ram_code)
return NULL;
}
/* Debugfs entry */
/*
* debugfs interface
*
* The memory controller driver exposes some files in debugfs that can be used
* to control the EMC frequency. The top-level directory can be found here:
*
* /sys/kernel/debug/emc
*
* It contains the following files:
*
* - available_rates: This file contains a list of valid, space-separated
* EMC frequencies.
*
* - min_rate: Writing a value to this file sets the given frequency as the
* floor of the permitted range. If this is higher than the currently
* configured EMC frequency, this will cause the frequency to be
* increased so that it stays within the valid range.
*
* - max_rate: Similarly to the min_rate file, writing a value to this file
* sets the given frequency as the ceiling of the permitted range. If
* the value is lower than the currently configured EMC frequency, this
* will cause the frequency to be decreased so that it stays within the
* valid range.
*/
static int emc_debug_rate_get(void *data, u64 *rate)
static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate)
{
struct clk *c = data;
unsigned int i;
*rate = clk_get_rate(c);
for (i = 0; i < emc->num_timings; i++)
if (rate == emc->timings[i].rate)
return true;
return 0;
return false;
}
static int emc_debug_rate_set(void *data, u64 rate)
{
struct clk *c = data;
return clk_set_rate(c, rate);
}
DEFINE_SIMPLE_ATTRIBUTE(emc_debug_rate_fops, emc_debug_rate_get,
emc_debug_rate_set, "%lld\n");
static int emc_debug_supported_rates_show(struct seq_file *s, void *data)
static int tegra_emc_debug_available_rates_show(struct seq_file *s,
void *data)
{
struct tegra_emc *emc = s->private;
const char *prefix = "";
unsigned int i;
for (i = 0; i < emc->num_timings; i++) {
struct emc_timing *timing = &emc->timings[i];
seq_printf(s, "%s%lu", prefix, timing->rate);
seq_printf(s, "%s%lu", prefix, emc->timings[i].rate);
prefix = " ";
}
@ -1038,46 +1059,126 @@ static int emc_debug_supported_rates_show(struct seq_file *s, void *data)
return 0;
}
static int emc_debug_supported_rates_open(struct inode *inode,
struct file *file)
static int tegra_emc_debug_available_rates_open(struct inode *inode,
struct file *file)
{
return single_open(file, emc_debug_supported_rates_show,
return single_open(file, tegra_emc_debug_available_rates_show,
inode->i_private);
}
static const struct file_operations emc_debug_supported_rates_fops = {
.open = emc_debug_supported_rates_open,
static const struct file_operations tegra_emc_debug_available_rates_fops = {
.open = tegra_emc_debug_available_rates_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int tegra_emc_debug_min_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.min_rate;
return 0;
}
static int tegra_emc_debug_min_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_min_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.min_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
tegra_emc_debug_min_rate_get,
tegra_emc_debug_min_rate_set, "%llu\n");
static int tegra_emc_debug_max_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.max_rate;
return 0;
}
static int tegra_emc_debug_max_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_max_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.max_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
tegra_emc_debug_max_rate_get,
tegra_emc_debug_max_rate_set, "%llu\n");
static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc)
{
struct dentry *root, *file;
struct clk *clk;
unsigned int i;
int err;
root = debugfs_create_dir("emc", NULL);
if (!root) {
emc->clk = devm_clk_get(dev, "emc");
if (IS_ERR(emc->clk)) {
if (PTR_ERR(emc->clk) != -ENODEV) {
dev_err(dev, "failed to get EMC clock: %ld\n",
PTR_ERR(emc->clk));
return;
}
}
emc->debugfs.min_rate = ULONG_MAX;
emc->debugfs.max_rate = 0;
for (i = 0; i < emc->num_timings; i++) {
if (emc->timings[i].rate < emc->debugfs.min_rate)
emc->debugfs.min_rate = emc->timings[i].rate;
if (emc->timings[i].rate > emc->debugfs.max_rate)
emc->debugfs.max_rate = emc->timings[i].rate;
}
err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
emc->debugfs.max_rate);
if (err < 0) {
dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n",
emc->debugfs.min_rate, emc->debugfs.max_rate,
emc->clk);
return;
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(dev, "failed to create debugfs directory\n");
return;
}
clk = clk_get_sys("tegra-clk-debug", "emc");
if (IS_ERR(clk)) {
dev_err(dev, "failed to get debug clock: %ld\n", PTR_ERR(clk));
return;
}
file = debugfs_create_file("rate", S_IRUGO | S_IWUSR, root, clk,
&emc_debug_rate_fops);
if (!file)
dev_err(dev, "failed to create debugfs entry\n");
file = debugfs_create_file("supported_rates", S_IRUGO, root, emc,
&emc_debug_supported_rates_fops);
if (!file)
dev_err(dev, "failed to create debugfs entry\n");
debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, emc,
&tegra_emc_debug_available_rates_fops);
debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_min_rate_fops);
debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_max_rate_fops);
}
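
As a usage illustration (not part of the patch), a small userspace program can drive this interface once debugfs is mounted at /sys/kernel/debug; per tegra_emc_debug_min_rate_set() above, the value written must be one of the advertised rates or the write fails with EINVAL:

#include <stdio.h>

int main(void)
{
	unsigned long rate, max = 0;
	FILE *f = fopen("/sys/kernel/debug/emc/available_rates", "r");

	if (!f)
		return 1;

	while (fscanf(f, "%lu", &rate) == 1)
		if (rate > max)
			max = rate;
	fclose(f);

	/* pin the EMC frequency floor to the highest supported rate */
	f = fopen("/sys/kernel/debug/emc/min_rate", "w");
	if (!f)
		return 1;

	fprintf(f, "%lu", max);
	return fclose(f) ? 1 : 0;
}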
static int tegra_emc_probe(struct platform_device *pdev)


@ -0,0 +1,293 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2019 NVIDIA CORPORATION. All rights reserved.
*/
#include <linux/clk.h>
#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
#include <soc/tegra/bpmp.h>
struct tegra186_emc_dvfs {
unsigned long latency;
unsigned long rate;
};
struct tegra186_emc {
struct tegra_bpmp *bpmp;
struct device *dev;
struct clk *clk;
struct tegra186_emc_dvfs *dvfs;
unsigned int num_dvfs;
struct {
struct dentry *root;
unsigned long min_rate;
unsigned long max_rate;
} debugfs;
};
/*
* debugfs interface
*
* The memory controller driver exposes some files in debugfs that can be used
* to control the EMC frequency. The top-level directory can be found here:
*
* /sys/kernel/debug/emc
*
* It contains the following files:
*
* - available_rates: This file contains a list of valid, space-separated
* EMC frequencies.
*
* - min_rate: Writing a value to this file sets the given frequency as the
* floor of the permitted range. If this is higher than the currently
* configured EMC frequency, this will cause the frequency to be
* increased so that it stays within the valid range.
*
* - max_rate: Similarly to the min_rate file, writing a value to this file
* sets the given frequency as the ceiling of the permitted range. If
* the value is lower than the currently configured EMC frequency, this
* will cause the frequency to be decreased so that it stays within the
* valid range.
*/
static bool tegra186_emc_validate_rate(struct tegra186_emc *emc,
unsigned long rate)
{
unsigned int i;
for (i = 0; i < emc->num_dvfs; i++)
if (rate == emc->dvfs[i].rate)
return true;
return false;
}
static int tegra186_emc_debug_available_rates_show(struct seq_file *s,
void *data)
{
struct tegra186_emc *emc = s->private;
const char *prefix = "";
unsigned int i;
for (i = 0; i < emc->num_dvfs; i++) {
seq_printf(s, "%s%lu", prefix, emc->dvfs[i].rate);
prefix = " ";
}
seq_puts(s, "\n");
return 0;
}
static int tegra186_emc_debug_available_rates_open(struct inode *inode,
struct file *file)
{
return single_open(file, tegra186_emc_debug_available_rates_show,
inode->i_private);
}
static const struct file_operations tegra186_emc_debug_available_rates_fops = {
.open = tegra186_emc_debug_available_rates_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int tegra186_emc_debug_min_rate_get(void *data, u64 *rate)
{
struct tegra186_emc *emc = data;
*rate = emc->debugfs.min_rate;
return 0;
}
static int tegra186_emc_debug_min_rate_set(void *data, u64 rate)
{
struct tegra186_emc *emc = data;
int err;
if (!tegra186_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_min_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.min_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_min_rate_fops,
tegra186_emc_debug_min_rate_get,
tegra186_emc_debug_min_rate_set, "%llu\n");
static int tegra186_emc_debug_max_rate_get(void *data, u64 *rate)
{
struct tegra186_emc *emc = data;
*rate = emc->debugfs.max_rate;
return 0;
}
static int tegra186_emc_debug_max_rate_set(void *data, u64 rate)
{
struct tegra186_emc *emc = data;
int err;
if (!tegra186_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_max_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.max_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra186_emc_debug_max_rate_fops,
tegra186_emc_debug_max_rate_get,
tegra186_emc_debug_max_rate_set, "%llu\n");
static int tegra186_emc_probe(struct platform_device *pdev)
{
struct mrq_emc_dvfs_latency_response response;
struct tegra_bpmp_message msg;
struct tegra186_emc *emc;
unsigned int i;
int err;
emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL);
if (!emc)
return -ENOMEM;
emc->bpmp = tegra_bpmp_get(&pdev->dev);
if (IS_ERR(emc->bpmp)) {
err = PTR_ERR(emc->bpmp);
if (err != -EPROBE_DEFER)
dev_err(&pdev->dev, "failed to get BPMP: %d\n", err);
return err;
}
emc->clk = devm_clk_get(&pdev->dev, "emc");
if (IS_ERR(emc->clk)) {
err = PTR_ERR(emc->clk);
dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err);
return err;
}
platform_set_drvdata(pdev, emc);
emc->dev = &pdev->dev;
memset(&msg, 0, sizeof(msg));
msg.mrq = MRQ_EMC_DVFS_LATENCY;
msg.tx.data = NULL;
msg.tx.size = 0;
msg.rx.data = &response;
msg.rx.size = sizeof(response);
err = tegra_bpmp_transfer(emc->bpmp, &msg);
if (err < 0) {
dev_err(&pdev->dev, "failed to EMC DVFS pairs: %d\n", err);
return err;
}
emc->debugfs.min_rate = ULONG_MAX;
emc->debugfs.max_rate = 0;
emc->num_dvfs = response.num_pairs;
emc->dvfs = devm_kmalloc_array(&pdev->dev, emc->num_dvfs,
sizeof(*emc->dvfs), GFP_KERNEL);
if (!emc->dvfs)
return -ENOMEM;
dev_dbg(&pdev->dev, "%u DVFS pairs:\n", emc->num_dvfs);
for (i = 0; i < emc->num_dvfs; i++) {
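/* BPMP reports the frequency in kHz; the clk API expects Hz, hence the * 1000 */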
emc->dvfs[i].rate = response.pairs[i].freq * 1000;
emc->dvfs[i].latency = response.pairs[i].latency;
if (emc->dvfs[i].rate < emc->debugfs.min_rate)
emc->debugfs.min_rate = emc->dvfs[i].rate;
if (emc->dvfs[i].rate > emc->debugfs.max_rate)
emc->debugfs.max_rate = emc->dvfs[i].rate;
dev_dbg(&pdev->dev, " %2u: %lu Hz -> %lu us\n", i,
emc->dvfs[i].rate, emc->dvfs[i].latency);
}
err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
emc->debugfs.max_rate);
if (err < 0) {
dev_err(&pdev->dev,
"failed to set rate range [%lu-%lu] for %pC\n",
emc->debugfs.min_rate, emc->debugfs.max_rate,
emc->clk);
return err;
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(&pdev->dev, "failed to create debugfs directory\n");
return 0;
}
debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root,
emc, &tegra186_emc_debug_available_rates_fops);
debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra186_emc_debug_min_rate_fops);
debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra186_emc_debug_max_rate_fops);
return 0;
}
static int tegra186_emc_remove(struct platform_device *pdev)
{
struct tegra186_emc *emc = platform_get_drvdata(pdev);
debugfs_remove_recursive(emc->debugfs.root);
tegra_bpmp_put(emc->bpmp);
return 0;
}
static const struct of_device_id tegra186_emc_of_match[] = {
#if defined(CONFIG_ARCH_TEGRA_186_SOC)
{ .compatible = "nvidia,tegra186-emc" },
#endif
#if defined(CONFIG_ARCH_TEGRA_194_SOC)
{ .compatible = "nvidia,tegra194-emc" },
#endif
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, tegra186_emc_of_match);
static struct platform_driver tegra186_emc_driver = {
.driver = {
.name = "tegra186-emc",
.of_match_table = tegra186_emc_of_match,
.suppress_bind_attrs = true,
},
.probe = tegra186_emc_probe,
.remove = tegra186_emc_remove,
};
module_platform_driver(tegra186_emc_driver);
MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra186 External Memory Controller driver");
MODULE_LICENSE("GPL v2");

[diff of one large file not shown here because of its size]


@ -8,6 +8,7 @@
#include <linux/clk.h>
#include <linux/clk/tegra.h>
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
@ -150,6 +151,12 @@ struct tegra_emc {
struct emc_timing *timings;
unsigned int num_timings;
struct {
struct dentry *root;
unsigned long min_rate;
unsigned long max_rate;
} debugfs;
};
static irqreturn_t tegra_emc_isr(int irq, void *data)
@ -478,6 +485,171 @@ static long emc_round_rate(unsigned long rate,
return timing->rate;
}
/*
* debugfs interface
*
* The memory controller driver exposes some files in debugfs that can be used
* to control the EMC frequency. The top-level directory can be found here:
*
* /sys/kernel/debug/emc
*
* It contains the following files:
*
* - available_rates: This file contains a list of valid, space-separated
* EMC frequencies.
*
* - min_rate: Writing a value to this file sets the given frequency as the
* floor of the permitted range. If this is higher than the currently
* configured EMC frequency, this will cause the frequency to be
* increased so that it stays within the valid range.
*
* - max_rate: Similarly to the min_rate file, writing a value to this file
* sets the given frequency as the ceiling of the permitted range. If
* the value is lower than the currently configured EMC frequency, this
* will cause the frequency to be decreased so that it stays within the
* valid range.
*/
static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate)
{
unsigned int i;
for (i = 0; i < emc->num_timings; i++)
if (rate == emc->timings[i].rate)
return true;
return false;
}
static int tegra_emc_debug_available_rates_show(struct seq_file *s, void *data)
{
struct tegra_emc *emc = s->private;
const char *prefix = "";
unsigned int i;
for (i = 0; i < emc->num_timings; i++) {
seq_printf(s, "%s%lu", prefix, emc->timings[i].rate);
prefix = " ";
}
seq_puts(s, "\n");
return 0;
}
static int tegra_emc_debug_available_rates_open(struct inode *inode,
struct file *file)
{
return single_open(file, tegra_emc_debug_available_rates_show,
inode->i_private);
}
static const struct file_operations tegra_emc_debug_available_rates_fops = {
.open = tegra_emc_debug_available_rates_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int tegra_emc_debug_min_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.min_rate;
return 0;
}
static int tegra_emc_debug_min_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_min_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.min_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
tegra_emc_debug_min_rate_get,
tegra_emc_debug_min_rate_set, "%llu\n");
static int tegra_emc_debug_max_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.max_rate;
return 0;
}
static int tegra_emc_debug_max_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_max_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.max_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
tegra_emc_debug_max_rate_get,
tegra_emc_debug_max_rate_set, "%llu\n");
static void tegra_emc_debugfs_init(struct tegra_emc *emc)
{
struct device *dev = emc->dev;
unsigned int i;
int err;
emc->debugfs.min_rate = ULONG_MAX;
emc->debugfs.max_rate = 0;
for (i = 0; i < emc->num_timings; i++) {
if (emc->timings[i].rate < emc->debugfs.min_rate)
emc->debugfs.min_rate = emc->timings[i].rate;
if (emc->timings[i].rate > emc->debugfs.max_rate)
emc->debugfs.max_rate = emc->timings[i].rate;
}
err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
emc->debugfs.max_rate);
if (err < 0) {
dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n",
emc->debugfs.min_rate, emc->debugfs.max_rate,
emc->clk);
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(emc->dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root,
emc, &tegra_emc_debug_available_rates_fops);
debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_min_rate_fops);
debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_max_rate_fops);
}
static int tegra_emc_probe(struct platform_device *pdev)
{
struct device_node *np;
@ -550,6 +722,9 @@ static int tegra_emc_probe(struct platform_device *pdev)
goto unset_cb;
}
platform_set_drvdata(pdev, emc);
tegra_emc_debugfs_init(emc);
return 0;
unset_cb:


@ -436,7 +436,7 @@ static const struct tegra_mc_client tegra210_mc_clients[] = {
.reg = 0x37c,
.shift = 0,
.mask = 0xff,
.def = 0x39,
.def = 0x7a,
},
}, {
.id = 0x4b,


@ -12,6 +12,7 @@
#include <linux/clk.h>
#include <linux/clk/tegra.h>
#include <linux/completion.h>
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/interrupt.h>
@ -331,7 +332,9 @@ struct tegra_emc {
struct clk *clk;
void __iomem *regs;
unsigned int irq;
bool bad_state;
struct emc_timing *new_timing;
struct emc_timing *timings;
unsigned int num_timings;
@ -345,10 +348,74 @@ struct tegra_emc {
bool vref_cal_toggle : 1;
bool zcal_long : 1;
bool dll_on : 1;
bool prepared : 1;
bool bad_state : 1;
struct {
struct dentry *root;
unsigned long min_rate;
unsigned long max_rate;
} debugfs;
};
static int emc_seq_update_timing(struct tegra_emc *emc)
{
u32 val;
int err;
writel_relaxed(EMC_TIMING_UPDATE, emc->regs + EMC_TIMING_CONTROL);
err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_STATUS, val,
!(val & EMC_STATUS_TIMING_UPDATE_STALLED),
1, 200);
if (err) {
dev_err(emc->dev, "failed to update timing: %d\n", err);
return err;
}
return 0;
}
static void emc_complete_clk_change(struct tegra_emc *emc)
{
struct emc_timing *timing = emc->new_timing;
unsigned int dram_num;
bool failed = false;
int err;
/* re-enable auto-refresh */
dram_num = tegra_mc_get_emem_device_count(emc->mc);
writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num),
emc->regs + EMC_REFCTRL);
/* restore auto-calibration */
if (emc->vref_cal_toggle)
writel_relaxed(timing->emc_auto_cal_interval,
emc->regs + EMC_AUTO_CAL_INTERVAL);
/* restore dynamic self-refresh */
if (timing->emc_cfg_dyn_self_ref) {
emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE;
writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG);
}
/* set number of clocks to wait after each ZQ command */
if (emc->zcal_long)
writel_relaxed(timing->emc_zcal_cnt_long,
emc->regs + EMC_ZCAL_WAIT_CNT);
/* wait for writes to settle */
udelay(2);
/* update restored timing */
err = emc_seq_update_timing(emc);
if (err)
failed = true;
/* restore early ACK */
mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE);
WRITE_ONCE(emc->bad_state, failed);
}
static irqreturn_t tegra_emc_isr(int irq, void *data)
{
struct tegra_emc *emc = data;
@ -359,10 +426,6 @@ static irqreturn_t tegra_emc_isr(int irq, void *data)
if (!status)
return IRQ_NONE;
/* notify about EMC-CAR handshake completion */
if (status & EMC_CLKCHANGE_COMPLETE_INT)
complete(&emc->clk_handshake_complete);
/* notify about HW problem */
if (status & EMC_REFRESH_OVERFLOW_INT)
dev_err_ratelimited(emc->dev,
@ -371,6 +434,18 @@ static irqreturn_t tegra_emc_isr(int irq, void *data)
/* clear interrupts */
writel_relaxed(status, emc->regs + EMC_INTSTATUS);
/* notify about EMC-CAR handshake completion */
if (status & EMC_CLKCHANGE_COMPLETE_INT) {
if (completion_done(&emc->clk_handshake_complete)) {
dev_err_ratelimited(emc->dev,
"bogus handshake interrupt\n");
return IRQ_NONE;
}
emc_complete_clk_change(emc);
complete(&emc->clk_handshake_complete);
}
return IRQ_HANDLED;
}
@ -438,24 +513,6 @@ static bool emc_dqs_preset(struct tegra_emc *emc, struct emc_timing *timing,
return preset;
}
static int emc_seq_update_timing(struct tegra_emc *emc)
{
u32 val;
int err;
writel_relaxed(EMC_TIMING_UPDATE, emc->regs + EMC_TIMING_CONTROL);
err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_STATUS, val,
!(val & EMC_STATUS_TIMING_UPDATE_STALLED),
1, 200);
if (err) {
dev_err(emc->dev, "failed to update timing: %d\n", err);
return err;
}
return 0;
}
static int emc_prepare_mc_clk_cfg(struct tegra_emc *emc, unsigned long rate)
{
struct tegra_mc *mc = emc->mc;
@ -582,8 +639,7 @@ static int emc_prepare_timing_change(struct tegra_emc *emc, unsigned long rate)
!(val & EMC_AUTO_CAL_STATUS_ACTIVE), 1, 300);
if (err) {
dev_err(emc->dev,
"failed to disable auto-cal: %d\n",
err);
"auto-cal finish timeout: %d\n", err);
return err;
}
@ -621,9 +677,6 @@ static int emc_prepare_timing_change(struct tegra_emc *emc, unsigned long rate)
writel_relaxed(val, emc->regs + EMC_MRS_WAIT_CNT);
}
/* disable interrupt since read access is prohibited after stalling */
disable_irq(emc->irq);
/* this read also completes the writes */
val = readl_relaxed(emc->regs + EMC_SEL_DPD_CTRL);
@ -739,20 +792,18 @@ static int emc_prepare_timing_change(struct tegra_emc *emc, unsigned long rate)
emc->regs + EMC_ZQ_CAL);
}
/* re-enable auto-refresh */
writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num),
emc->regs + EMC_REFCTRL);
/* flow control marker 3 */
writel_relaxed(0x1, emc->regs + EMC_UNSTALL_RW_AFTER_CLKCHANGE);
/*
* Read and discard an arbitrary MC register (Note: EMC registers
* can't be used) to ensure the register writes are completed.
*/
mc_readl(emc->mc, MC_EMEM_ARB_OVERRIDE);
reinit_completion(&emc->clk_handshake_complete);
/* interrupt can be re-enabled now */
enable_irq(emc->irq);
emc->bad_state = false;
emc->prepared = true;
emc->new_timing = timing;
return 0;
}
@ -760,52 +811,25 @@ static int emc_prepare_timing_change(struct tegra_emc *emc, unsigned long rate)
static int emc_complete_timing_change(struct tegra_emc *emc,
unsigned long rate)
{
struct emc_timing *timing = emc_find_timing(emc, rate);
unsigned long timeout;
int ret;
timeout = wait_for_completion_timeout(&emc->clk_handshake_complete,
msecs_to_jiffies(100));
if (timeout == 0) {
dev_err(emc->dev, "emc-car handshake failed\n");
emc->bad_state = true;
return -EIO;
}
/* restore auto-calibration */
if (emc->vref_cal_toggle)
writel_relaxed(timing->emc_auto_cal_interval,
emc->regs + EMC_AUTO_CAL_INTERVAL);
if (READ_ONCE(emc->bad_state))
return -EIO;
/* restore dynamic self-refresh */
if (timing->emc_cfg_dyn_self_ref) {
emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE;
writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG);
}
/* set number of clocks to wait after each ZQ command */
if (emc->zcal_long)
writel_relaxed(timing->emc_zcal_cnt_long,
emc->regs + EMC_ZCAL_WAIT_CNT);
udelay(2);
/* update restored timing */
ret = emc_seq_update_timing(emc);
if (ret)
emc->bad_state = true;
/* restore early ACK */
mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE);
emc->prepared = false;
return ret;
return 0;
}
static int emc_unprepare_timing_change(struct tegra_emc *emc,
unsigned long rate)
{
if (emc->prepared && !emc->bad_state) {
if (!emc->bad_state) {
/* shouldn't ever happen in practice */
dev_err(emc->dev, "timing configuration can't be reverted\n");
emc->bad_state = true;
@ -823,7 +847,13 @@ static int emc_clk_change_notify(struct notifier_block *nb,
switch (msg) {
case PRE_RATE_CHANGE:
/*
* Disable interrupt since read accesses are prohibited after
* stalling.
*/
disable_irq(emc->irq);
err = emc_prepare_timing_change(emc, cnd->new_rate);
enable_irq(emc->irq);
break;
case ABORT_RATE_CHANGE:
@ -1083,6 +1113,171 @@ static long emc_round_rate(unsigned long rate,
return timing->rate;
}
/*
* debugfs interface
*
* The memory controller driver exposes some files in debugfs that can be used
* to control the EMC frequency. The top-level directory can be found here:
*
* /sys/kernel/debug/emc
*
* It contains the following files:
*
* - available_rates: This file contains a list of valid, space-separated
* EMC frequencies.
*
* - min_rate: Writing a value to this file sets the given frequency as the
* floor of the permitted range. If this is higher than the currently
* configured EMC frequency, this will cause the frequency to be
* increased so that it stays within the valid range.
*
* - max_rate: Similarly to the min_rate file, writing a value to this file
* sets the given frequency as the ceiling of the permitted range. If
* the value is lower than the currently configured EMC frequency, this
* will cause the frequency to be decreased so that it stays within the
* valid range.
*/
static bool tegra_emc_validate_rate(struct tegra_emc *emc, unsigned long rate)
{
unsigned int i;
for (i = 0; i < emc->num_timings; i++)
if (rate == emc->timings[i].rate)
return true;
return false;
}
static int tegra_emc_debug_available_rates_show(struct seq_file *s, void *data)
{
struct tegra_emc *emc = s->private;
const char *prefix = "";
unsigned int i;
for (i = 0; i < emc->num_timings; i++) {
seq_printf(s, "%s%lu", prefix, emc->timings[i].rate);
prefix = " ";
}
seq_puts(s, "\n");
return 0;
}
static int tegra_emc_debug_available_rates_open(struct inode *inode,
struct file *file)
{
return single_open(file, tegra_emc_debug_available_rates_show,
inode->i_private);
}
static const struct file_operations tegra_emc_debug_available_rates_fops = {
.open = tegra_emc_debug_available_rates_open,
.read = seq_read,
.llseek = seq_lseek,
.release = single_release,
};
static int tegra_emc_debug_min_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.min_rate;
return 0;
}
static int tegra_emc_debug_min_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_min_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.min_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_min_rate_fops,
tegra_emc_debug_min_rate_get,
tegra_emc_debug_min_rate_set, "%llu\n");
static int tegra_emc_debug_max_rate_get(void *data, u64 *rate)
{
struct tegra_emc *emc = data;
*rate = emc->debugfs.max_rate;
return 0;
}
static int tegra_emc_debug_max_rate_set(void *data, u64 rate)
{
struct tegra_emc *emc = data;
int err;
if (!tegra_emc_validate_rate(emc, rate))
return -EINVAL;
err = clk_set_max_rate(emc->clk, rate);
if (err < 0)
return err;
emc->debugfs.max_rate = rate;
return 0;
}
DEFINE_SIMPLE_ATTRIBUTE(tegra_emc_debug_max_rate_fops,
tegra_emc_debug_max_rate_get,
tegra_emc_debug_max_rate_set, "%llu\n");
static void tegra_emc_debugfs_init(struct tegra_emc *emc)
{
struct device *dev = emc->dev;
unsigned int i;
int err;
emc->debugfs.min_rate = ULONG_MAX;
emc->debugfs.max_rate = 0;
for (i = 0; i < emc->num_timings; i++) {
if (emc->timings[i].rate < emc->debugfs.min_rate)
emc->debugfs.min_rate = emc->timings[i].rate;
if (emc->timings[i].rate > emc->debugfs.max_rate)
emc->debugfs.max_rate = emc->timings[i].rate;
}
err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
emc->debugfs.max_rate);
if (err < 0) {
dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n",
emc->debugfs.min_rate, emc->debugfs.max_rate,
emc->clk);
}
emc->debugfs.root = debugfs_create_dir("emc", NULL);
if (!emc->debugfs.root) {
dev_err(emc->dev, "failed to create debugfs directory\n");
return;
}
debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root,
emc, &tegra_emc_debug_available_rates_fops);
debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_min_rate_fops);
debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root,
emc, &tegra_emc_debug_max_rate_fops);
}
static int tegra_emc_probe(struct platform_device *pdev)
{
struct platform_device *mc;
@ -1169,6 +1364,7 @@ static int tegra_emc_probe(struct platform_device *pdev)
}
platform_set_drvdata(pdev, emc);
tegra_emc_debugfs_init(emc);
return 0;
@ -1181,13 +1377,17 @@ unset_cb:
static int tegra_emc_suspend(struct device *dev)
{
struct tegra_emc *emc = dev_get_drvdata(dev);
int err;
/*
* Suspending in a bad state will hang machine. The "prepared" var
* shall be always false here unless it's a kernel bug that caused
* suspending in a wrong order.
*/
if (WARN_ON(emc->prepared) || emc->bad_state)
/* take exclusive control over the clock's rate */
err = clk_rate_exclusive_get(emc->clk);
if (err) {
dev_err(emc->dev, "failed to acquire clk: %d\n", err);
return err;
}
/* suspending in a bad state will hang machine */
if (WARN(emc->bad_state, "hardware in a bad state\n"))
return -EINVAL;
emc->bad_state = true;
@ -1202,6 +1402,8 @@ static int tegra_emc_resume(struct device *dev)
emc_setup_hw(emc);
emc->bad_state = false;
clk_rate_exclusive_put(emc->clk);
return 0;
}


@ -74,7 +74,7 @@ config FSL_XGMAC_MDIO
config UCC_GETH
tristate "Freescale QE Gigabit Ethernet"
depends on QUICC_ENGINE
depends on QUICC_ENGINE && PPC32
select FSL_PQ_MDIO
select PHYLIB
---help---


@ -84,8 +84,8 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
int ret, i;
void *bd_buffer;
dma_addr_t bd_dma_addr;
u32 riptr;
u32 tiptr;
s32 riptr;
s32 tiptr;
u32 gumr;
ut_info = priv->ut_info;
@ -195,7 +195,7 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
priv->ucc_pram_offset = qe_muram_alloc(sizeof(struct ucc_hdlc_param),
ALIGNMENT_OF_UCC_HDLC_PRAM);
if (IS_ERR_VALUE(priv->ucc_pram_offset)) {
if (priv->ucc_pram_offset < 0) {
dev_err(priv->dev, "Can not allocate MURAM for hdlc parameter.\n");
ret = -ENOMEM;
goto free_tx_bd;
@ -233,18 +233,23 @@ static int uhdlc_init(struct ucc_hdlc_private *priv)
/* Alloc riptr, tiptr */
riptr = qe_muram_alloc(32, 32);
if (IS_ERR_VALUE(riptr)) {
if (riptr < 0) {
dev_err(priv->dev, "Cannot allocate MURAM mem for Receive internal temp data pointer\n");
ret = -ENOMEM;
goto free_tx_skbuff;
}
tiptr = qe_muram_alloc(32, 32);
if (IS_ERR_VALUE(tiptr)) {
if (tiptr < 0) {
dev_err(priv->dev, "Cannot allocate MURAM mem for Transmit internal temp data pointer\n");
ret = -ENOMEM;
goto free_riptr;
}
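/* riptr/tiptr are programmed into 16-bit registers below, so they must fit in a u16 */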
if (riptr != (u16)riptr || tiptr != (u16)tiptr) {
dev_err(priv->dev, "MURAM allocation out of addressable range\n");
ret = -ENOMEM;
goto free_tiptr;
}
/* Set RIPTR, TIPTR */
iowrite16be(riptr, &priv->ucc_pram->riptr);
@ -623,8 +628,8 @@ static int ucc_hdlc_poll(struct napi_struct *napi, int budget)
if (howmany < budget) {
napi_complete_done(napi, howmany);
qe_setbits32(priv->uccf->p_uccm,
(UCCE_HDLC_RX_EVENTS | UCCE_HDLC_TX_EVENTS) << 16);
qe_setbits_be32(priv->uccf->p_uccm,
(UCCE_HDLC_RX_EVENTS | UCCE_HDLC_TX_EVENTS) << 16);
}
return howmany;
@ -730,8 +735,8 @@ static int uhdlc_open(struct net_device *dev)
static void uhdlc_memclean(struct ucc_hdlc_private *priv)
{
qe_muram_free(priv->ucc_pram->riptr);
qe_muram_free(priv->ucc_pram->tiptr);
qe_muram_free(ioread16be(&priv->ucc_pram->riptr));
qe_muram_free(ioread16be(&priv->ucc_pram->tiptr));
if (priv->rx_bd_base) {
dma_free_coherent(priv->dev,


@ -98,7 +98,7 @@ struct ucc_hdlc_private {
unsigned short tx_ring_size;
unsigned short rx_ring_size;
u32 ucc_pram_offset;
s32 ucc_pram_offset;
unsigned short encoding;
unsigned short parity;


@ -415,6 +415,42 @@ int of_cpu_node_to_id(struct device_node *cpu_node)
}
EXPORT_SYMBOL(of_cpu_node_to_id);
/**
* of_get_cpu_state_node - Get CPU's idle state node at the given index
*
* @cpu_node: The device node for the CPU
* @index: The index in the list of the idle states
*
* Two generic methods can be used to describe a CPU's idle states, either via
* a flattened description through the "cpu-idle-states" binding or via the
* hierarchical layout, using the "power-domains" and the "domain-idle-states"
* bindings. This function checks for both and returns the idle state node for
* the requested index.
*
* In case an idle state node is found at @index, the refcount is incremented
* for it, so call of_node_put() on it when done. Returns NULL if not found.
*/
struct device_node *of_get_cpu_state_node(struct device_node *cpu_node,
int index)
{
struct of_phandle_args args;
int err;
err = of_parse_phandle_with_args(cpu_node, "power-domains",
"#power-domain-cells", 0, &args);
if (!err) {
struct device_node *state_node =
of_parse_phandle(args.np, "domain-idle-states", index);
of_node_put(args.np);
if (state_node)
return state_node;
}
return of_parse_phandle(cpu_node, "cpu-idle-states", index);
}
EXPORT_SYMBOL(of_get_cpu_state_node);
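
A minimal sketch (hypothetical helper, not part of the patch) of how a caller such as a cpuidle driver might consume this: iterate indices until the lookup fails, dropping each reference as the kernel-doc above requires.

static int of_cpu_count_idle_states(struct device_node *cpu_node)
{
	struct device_node *state;
	int count = 0;

	while ((state = of_get_cpu_state_node(cpu_node, count))) {
		of_node_put(state);
		count++;
	}

	return count;
}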
/**
* __of_device_is_compatible() - Check if the node matches given constraints
* @device: pointer to node


@ -49,6 +49,13 @@ config RESET_BRCMSTB
This enables the reset controller driver for Broadcom STB SoCs using
a SUN_TOP_CTRL_SW_INIT style controller.
config RESET_BRCMSTB_RESCAL
bool "Broadcom STB RESCAL reset controller"
default ARCH_BRCMSTB || COMPILE_TEST
help
This enables the RESCAL reset controller for SATA, PCIe0, or PCIe1 on
BCM7216.
config RESET_HSDK
bool "Synopsys HSDK Reset Driver"
depends on HAS_IOMEM
@ -64,6 +71,15 @@ config RESET_IMX7
help
This enables the reset controller driver for i.MX7 SoCs.
config RESET_INTEL_GW
bool "Intel Reset Controller Driver"
depends on OF
select REGMAP_MMIO
help
This enables the reset controller driver for Intel Gateway SoCs.
Say Y to control the reset signals provided by the reset controller.
Otherwise, say N.
config RESET_LANTIQ
bool "Lantiq XWAY Reset Driver" if COMPILE_TEST
default SOC_TYPE_XWAY
@ -89,6 +105,13 @@ config RESET_MESON_AUDIO_ARB
This enables the reset driver for Audio Memory Arbiter of
Amlogic's A113 based SoCs
config RESET_NPCM
bool "NPCM BMC Reset Driver" if COMPILE_TEST
default ARCH_NPCM
help
This enables the reset controller driver for Nuvoton NPCM
BMC SoCs.
config RESET_OXNAS
bool
@ -99,7 +122,7 @@ config RESET_PISTACHIO
This enables the reset driver for ImgTec Pistachio SoCs.
config RESET_QCOM_AOSS
bool "Qcom AOSS Reset Driver"
tristate "Qcom AOSS Reset Driver"
depends on ARCH_QCOM || COMPILE_TEST
help
This enables the AOSS (always on subsystem) reset driver


@ -8,12 +8,15 @@ obj-$(CONFIG_RESET_ATH79) += reset-ath79.o
obj-$(CONFIG_RESET_AXS10X) += reset-axs10x.o
obj-$(CONFIG_RESET_BERLIN) += reset-berlin.o
obj-$(CONFIG_RESET_BRCMSTB) += reset-brcmstb.o
obj-$(CONFIG_RESET_BRCMSTB_RESCAL) += reset-brcmstb-rescal.o
obj-$(CONFIG_RESET_HSDK) += reset-hsdk.o
obj-$(CONFIG_RESET_IMX7) += reset-imx7.o
obj-$(CONFIG_RESET_INTEL_GW) += reset-intel-gw.o
obj-$(CONFIG_RESET_LANTIQ) += reset-lantiq.o
obj-$(CONFIG_RESET_LPC18XX) += reset-lpc18xx.o
obj-$(CONFIG_RESET_MESON) += reset-meson.o
obj-$(CONFIG_RESET_MESON_AUDIO_ARB) += reset-meson-audio-arb.o
obj-$(CONFIG_RESET_NPCM) += reset-npcm.o
obj-$(CONFIG_RESET_OXNAS) += reset-oxnas.o
obj-$(CONFIG_RESET_PISTACHIO) += reset-pistachio.o
obj-$(CONFIG_RESET_QCOM_AOSS) += reset-qcom-aoss.o


@ -150,13 +150,14 @@ int devm_reset_controller_register(struct device *dev,
return -ENOMEM;
ret = reset_controller_register(rcdev);
if (!ret) {
*rcdevp = rcdev;
devres_add(dev, rcdevp);
} else {
if (ret) {
devres_free(rcdevp);
return ret;
}
*rcdevp = rcdev;
devres_add(dev, rcdevp);
return ret;
}
EXPORT_SYMBOL_GPL(devm_reset_controller_register);
@ -787,13 +788,14 @@ struct reset_control *__devm_reset_control_get(struct device *dev,
return ERR_PTR(-ENOMEM);
rstc = __reset_control_get(dev, id, index, shared, optional, acquired);
if (!IS_ERR_OR_NULL(rstc)) {
*ptr = rstc;
devres_add(dev, ptr);
} else {
if (IS_ERR_OR_NULL(rstc)) {
devres_free(ptr);
return rstc;
}
*ptr = rstc;
devres_add(dev, ptr);
return rstc;
}
EXPORT_SYMBOL_GPL(__devm_reset_control_get);
@ -919,22 +921,21 @@ EXPORT_SYMBOL_GPL(of_reset_control_array_get);
struct reset_control *
devm_reset_control_array_get(struct device *dev, bool shared, bool optional)
{
struct reset_control **devres;
struct reset_control *rstc;
struct reset_control **ptr, *rstc;
devres = devres_alloc(devm_reset_control_release, sizeof(*devres),
GFP_KERNEL);
if (!devres)
ptr = devres_alloc(devm_reset_control_release, sizeof(*ptr),
GFP_KERNEL);
if (!ptr)
return ERR_PTR(-ENOMEM);
rstc = of_reset_control_array_get(dev->of_node, shared, optional, true);
if (IS_ERR_OR_NULL(rstc)) {
devres_free(devres);
devres_free(ptr);
return rstc;
}
*devres = rstc;
devres_add(dev, devres);
*ptr = rstc;
devres_add(dev, ptr);
return rstc;
}


@ -0,0 +1,107 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (C) 2018-2020 Broadcom */
#include <linux/device.h>
#include <linux/iopoll.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/reset-controller.h>
#define BRCM_RESCAL_START 0x0
#define BRCM_RESCAL_START_BIT BIT(0)
#define BRCM_RESCAL_CTRL 0x4
#define BRCM_RESCAL_STATUS 0x8
#define BRCM_RESCAL_STATUS_BIT BIT(0)
struct brcm_rescal_reset {
void __iomem *base;
struct device *dev;
struct reset_controller_dev rcdev;
};
static int brcm_rescal_reset_set(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct brcm_rescal_reset *data =
container_of(rcdev, struct brcm_rescal_reset, rcdev);
void __iomem *base = data->base;
u32 reg;
int ret;
reg = readl(base + BRCM_RESCAL_START);
writel(reg | BRCM_RESCAL_START_BIT, base + BRCM_RESCAL_START);
reg = readl(base + BRCM_RESCAL_START);
if (!(reg & BRCM_RESCAL_START_BIT)) {
dev_err(data->dev, "failed to start SATA/PCIe rescal\n");
return -EIO;
}
ret = readl_poll_timeout(base + BRCM_RESCAL_STATUS, reg,
!(reg & BRCM_RESCAL_STATUS_BIT), 100, 1000);
if (ret) {
dev_err(data->dev, "time out on SATA/PCIe rescal\n");
return ret;
}
reg = readl(base + BRCM_RESCAL_START);
writel(reg & ~BRCM_RESCAL_START_BIT, base + BRCM_RESCAL_START);
dev_dbg(data->dev, "SATA/PCIe rescal success\n");
return 0;
}
static int brcm_rescal_reset_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *reset_spec)
{
/* This is needed if #reset-cells == 0. */
return 0;
}
static const struct reset_control_ops brcm_rescal_reset_ops = {
.reset = brcm_rescal_reset_set,
};
static int brcm_rescal_reset_probe(struct platform_device *pdev)
{
struct brcm_rescal_reset *data;
struct resource *res;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(data->base))
return PTR_ERR(data->base);
data->rcdev.owner = THIS_MODULE;
data->rcdev.nr_resets = 1;
data->rcdev.ops = &brcm_rescal_reset_ops;
data->rcdev.of_node = pdev->dev.of_node;
data->rcdev.of_xlate = brcm_rescal_reset_xlate;
data->dev = &pdev->dev;
return devm_reset_controller_register(&pdev->dev, &data->rcdev);
}
static const struct of_device_id brcm_rescal_reset_of_match[] = {
{ .compatible = "brcm,bcm7216-pcie-sata-rescal" },
{ },
};
MODULE_DEVICE_TABLE(of, brcm_rescal_reset_of_match);
static struct platform_driver brcm_rescal_reset_driver = {
.probe = brcm_rescal_reset_probe,
.driver = {
.name = "brcm-rescal-reset",
.of_match_table = brcm_rescal_reset_of_match,
}
};
module_platform_driver(brcm_rescal_reset_driver);
MODULE_AUTHOR("Broadcom");
MODULE_DESCRIPTION("Broadcom SATA/PCIe rescal reset controller");
MODULE_LICENSE("GPL v2");


@ -0,0 +1,262 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2019 Intel Corporation.
* Lei Chuanhua <Chuanhua.lei@intel.com>
*/
#include <linux/bitfield.h>
#include <linux/init.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/regmap.h>
#include <linux/reset-controller.h>
#define RCU_RST_STAT 0x0024
#define RCU_RST_REQ 0x0048
#define REG_OFFSET GENMASK(31, 16)
#define BIT_OFFSET GENMASK(15, 8)
#define STAT_BIT_OFFSET GENMASK(7, 0)
#define to_reset_data(x) container_of(x, struct intel_reset_data, rcdev)
struct intel_reset_soc {
bool legacy;
u32 reset_cell_count;
};
struct intel_reset_data {
struct reset_controller_dev rcdev;
struct notifier_block restart_nb;
const struct intel_reset_soc *soc_data;
struct regmap *regmap;
struct device *dev;
u32 reboot_id;
};
static const struct regmap_config intel_rcu_regmap_config = {
.name = "intel-reset",
.reg_bits = 32,
.reg_stride = 4,
.val_bits = 32,
.fast_io = true,
};
/*
* The reset status register offset relative to
* the reset control register (X) is X + 4
*/
static u32 id_to_reg_and_bit_offsets(struct intel_reset_data *data,
unsigned long id, u32 *rst_req,
u32 *req_bit, u32 *stat_bit)
{
*rst_req = FIELD_GET(REG_OFFSET, id);
*req_bit = FIELD_GET(BIT_OFFSET, id);
if (data->soc_data->legacy)
*stat_bit = FIELD_GET(STAT_BIT_OFFSET, id);
else
*stat_bit = *req_bit;
if (data->soc_data->legacy && *rst_req == RCU_RST_REQ)
return RCU_RST_STAT;
else
return *rst_req + 0x4;
}
static int intel_set_clr_bits(struct intel_reset_data *data, unsigned long id,
bool set)
{
u32 rst_req, req_bit, rst_stat, stat_bit, val;
int ret;
rst_stat = id_to_reg_and_bit_offsets(data, id, &rst_req,
&req_bit, &stat_bit);
val = set ? BIT(req_bit) : 0;
ret = regmap_update_bits(data->regmap, rst_req, BIT(req_bit), val);
if (ret)
return ret;
return regmap_read_poll_timeout(data->regmap, rst_stat, val,
set == !!(val & BIT(stat_bit)), 20,
200);
}
static int intel_assert_device(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct intel_reset_data *data = to_reset_data(rcdev);
int ret;
ret = intel_set_clr_bits(data, id, true);
if (ret)
dev_err(data->dev, "Reset assert failed %d\n", ret);
return ret;
}
static int intel_deassert_device(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct intel_reset_data *data = to_reset_data(rcdev);
int ret;
ret = intel_set_clr_bits(data, id, false);
if (ret)
dev_err(data->dev, "Reset deassert failed %d\n", ret);
return ret;
}
static int intel_reset_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct intel_reset_data *data = to_reset_data(rcdev);
u32 rst_req, req_bit, rst_stat, stat_bit, val;
int ret;
rst_stat = id_to_reg_and_bit_offsets(data, id, &rst_req,
&req_bit, &stat_bit);
ret = regmap_read(data->regmap, rst_stat, &val);
if (ret)
return ret;
return !!(val & BIT(stat_bit));
}
static const struct reset_control_ops intel_reset_ops = {
.assert = intel_assert_device,
.deassert = intel_deassert_device,
.status = intel_reset_status,
};
static int intel_reset_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *spec)
{
struct intel_reset_data *data = to_reset_data(rcdev);
u32 id;
if (spec->args[1] > 31)
return -EINVAL;
id = FIELD_PREP(REG_OFFSET, spec->args[0]);
id |= FIELD_PREP(BIT_OFFSET, spec->args[1]);
if (data->soc_data->legacy) {
if (spec->args[2] > 31)
return -EINVAL;
id |= FIELD_PREP(STAT_BIT_OFFSET, spec->args[2]);
}
return id;
}
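
For concreteness, a worked example with assumed specifier values: on a legacy SoC, a three-cell specifier <0x48 12 12> (RCU_RST_REQ, request bit 12, status bit 12) packs to

id = FIELD_PREP(REG_OFFSET, 0x48)	/* 0x00480000 */
   | FIELD_PREP(BIT_OFFSET, 12)		/* 0x00000c00 */
   | FIELD_PREP(STAT_BIT_OFFSET, 12);	/* 0x0000000c => id = 0x00480c0c */

id_to_reg_and_bit_offsets() later unpacks the same fields with FIELD_GET() and, because the request register is RCU_RST_REQ on legacy parts, reads the status from RCU_RST_STAT rather than from X + 4.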
static int intel_reset_restart_handler(struct notifier_block *nb,
unsigned long action, void *data)
{
struct intel_reset_data *reset_data;
reset_data = container_of(nb, struct intel_reset_data, restart_nb);
intel_assert_device(&reset_data->rcdev, reset_data->reboot_id);
return NOTIFY_DONE;
}
static int intel_reset_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct device *dev = &pdev->dev;
struct intel_reset_data *data;
void __iomem *base;
u32 rb_id[3];
int ret;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->soc_data = of_device_get_match_data(dev);
if (!data->soc_data)
return -ENODEV;
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
return PTR_ERR(base);
data->regmap = devm_regmap_init_mmio(dev, base,
&intel_rcu_regmap_config);
if (IS_ERR(data->regmap)) {
dev_err(dev, "regmap initialization failed\n");
return PTR_ERR(data->regmap);
}
ret = device_property_read_u32_array(dev, "intel,global-reset", rb_id,
data->soc_data->reset_cell_count);
if (ret) {
dev_err(dev, "Failed to get global reset offset!\n");
return ret;
}
data->dev = dev;
data->rcdev.of_node = np;
data->rcdev.owner = dev->driver->owner;
data->rcdev.ops = &intel_reset_ops;
data->rcdev.of_xlate = intel_reset_xlate;
data->rcdev.of_reset_n_cells = data->soc_data->reset_cell_count;
ret = devm_reset_controller_register(&pdev->dev, &data->rcdev);
if (ret)
return ret;
data->reboot_id = FIELD_PREP(REG_OFFSET, rb_id[0]);
data->reboot_id |= FIELD_PREP(BIT_OFFSET, rb_id[1]);
if (data->soc_data->legacy)
data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET, rb_id[2]);
data->restart_nb.notifier_call = intel_reset_restart_handler;
data->restart_nb.priority = 128;
register_restart_handler(&data->restart_nb);
return 0;
}
static const struct intel_reset_soc xrx200_data = {
.legacy = true,
.reset_cell_count = 3,
};
static const struct intel_reset_soc lgm_data = {
.legacy = false,
.reset_cell_count = 2,
};
static const struct of_device_id intel_reset_match[] = {
{ .compatible = "intel,rcu-lgm", .data = &lgm_data },
{ .compatible = "intel,rcu-xrx200", .data = &xrx200_data },
{}
};
static struct platform_driver intel_reset_driver = {
.probe = intel_reset_probe,
.driver = {
.name = "intel-reset",
.of_match_table = intel_reset_match,
},
};
static int __init intel_reset_init(void)
{
return platform_driver_register(&intel_reset_driver);
}
/*
* The RCU is a system-core entity in the Always On domain; its clocks and
* resources are initialized during system-core initialization. Most
* platform- and architecture-specific devices also need to perform a reset
* operation as part of their own initialization, so register the RCU
* driver at postcore initcall time.
*/
postcore_initcall(intel_reset_init);

drivers/reset/reset-npcm.c (new file, 291 lines)

@ -0,0 +1,291 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2019 Nuvoton Technology corporation.
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/reboot.h>
#include <linux/reset-controller.h>
#include <linux/spinlock.h>
#include <linux/mfd/syscon.h>
#include <linux/regmap.h>
#include <linux/of_address.h>
/* NPCM7xx GCR registers */
#define NPCM_MDLR_OFFSET 0x7C
#define NPCM_MDLR_USBD0 BIT(9)
#define NPCM_MDLR_USBD1 BIT(8)
#define NPCM_MDLR_USBD2_4 BIT(21)
#define NPCM_MDLR_USBD5_9 BIT(22)
#define NPCM_USB1PHYCTL_OFFSET 0x140
#define NPCM_USB2PHYCTL_OFFSET 0x144
#define NPCM_USBXPHYCTL_RS BIT(28)
/* NPCM7xx Reset registers */
#define NPCM_SWRSTR 0x14
#define NPCM_SWRST BIT(2)
#define NPCM_IPSRST1 0x20
#define NPCM_IPSRST1_USBD1 BIT(5)
#define NPCM_IPSRST1_USBD2 BIT(8)
#define NPCM_IPSRST1_USBD3 BIT(25)
#define NPCM_IPSRST1_USBD4 BIT(22)
#define NPCM_IPSRST1_USBD5 BIT(23)
#define NPCM_IPSRST1_USBD6 BIT(24)
#define NPCM_IPSRST2 0x24
#define NPCM_IPSRST2_USB_HOST BIT(26)
#define NPCM_IPSRST3 0x34
#define NPCM_IPSRST3_USBD0 BIT(4)
#define NPCM_IPSRST3_USBD7 BIT(5)
#define NPCM_IPSRST3_USBD8 BIT(6)
#define NPCM_IPSRST3_USBD9 BIT(7)
#define NPCM_IPSRST3_USBPHY1 BIT(24)
#define NPCM_IPSRST3_USBPHY2 BIT(25)
#define NPCM_RC_RESETS_PER_REG 32
#define NPCM_MASK_RESETS GENMASK(4, 0)
struct npcm_rc_data {
struct reset_controller_dev rcdev;
struct notifier_block restart_nb;
u32 sw_reset_number;
void __iomem *base;
spinlock_t lock;
};
#define to_rc_data(p) container_of(p, struct npcm_rc_data, rcdev)
static int npcm_rc_restart(struct notifier_block *nb, unsigned long mode,
void *cmd)
{
struct npcm_rc_data *rc = container_of(nb, struct npcm_rc_data,
restart_nb);
writel(NPCM_SWRST << rc->sw_reset_number, rc->base + NPCM_SWRSTR);
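/* give the SoC time to actually reset; reaching the message below means it failed */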
mdelay(1000);
pr_emerg("%s: unable to restart system\n", __func__);
return NOTIFY_DONE;
}
static int npcm_rc_setclear_reset(struct reset_controller_dev *rcdev,
unsigned long id, bool set)
{
struct npcm_rc_data *rc = to_rc_data(rcdev);
unsigned int rst_bit = BIT(id & NPCM_MASK_RESETS);
unsigned int ctrl_offset = id >> 8;
unsigned long flags;
u32 stat;
spin_lock_irqsave(&rc->lock, flags);
stat = readl(rc->base + ctrl_offset);
if (set)
writel(stat | rst_bit, rc->base + ctrl_offset);
else
writel(stat & ~rst_bit, rc->base + ctrl_offset);
spin_unlock_irqrestore(&rc->lock, flags);
return 0;
}
static int npcm_rc_assert(struct reset_controller_dev *rcdev, unsigned long id)
{
return npcm_rc_setclear_reset(rcdev, id, true);
}
static int npcm_rc_deassert(struct reset_controller_dev *rcdev,
unsigned long id)
{
return npcm_rc_setclear_reset(rcdev, id, false);
}
static int npcm_rc_status(struct reset_controller_dev *rcdev,
unsigned long id)
{
struct npcm_rc_data *rc = to_rc_data(rcdev);
unsigned int rst_bit = BIT(id & NPCM_MASK_RESETS);
unsigned int ctrl_offset = id >> 8;
return (readl(rc->base + ctrl_offset) & rst_bit);
}
static int npcm_reset_xlate(struct reset_controller_dev *rcdev,
const struct of_phandle_args *reset_spec)
{
unsigned int offset, bit;
offset = reset_spec->args[0];
if (offset != NPCM_IPSRST1 && offset != NPCM_IPSRST2 &&
offset != NPCM_IPSRST3) {
dev_err(rcdev->dev, "Error reset register (0x%x)\n", offset);
return -EINVAL;
}
bit = reset_spec->args[1];
if (bit >= NPCM_RC_RESETS_PER_REG) {
dev_err(rcdev->dev, "Error reset number (%d)\n", bit);
return -EINVAL;
}
return (offset << 8) | bit;
}
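
For example (assumed values), a specifier <NPCM_IPSRST2 26>, i.e. the USB host reset per NPCM_IPSRST2_USB_HOST above, encodes to (0x24 << 8) | 26 = 0x241a, which npcm_rc_setclear_reset() splits back into register offset 0x24 and bit 26.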
static const struct of_device_id npcm_rc_match[] = {
{ .compatible = "nuvoton,npcm750-reset",
.data = (void *)"nuvoton,npcm750-gcr" },
{ }
};
/*
* The following procedure must be observed when initializing the USB PHYs,
* USB devices and the USB host at BMC boot
*/
static int npcm_usb_reset(struct platform_device *pdev, struct npcm_rc_data *rc)
{
u32 mdlr, iprst1, iprst2, iprst3;
struct device *dev = &pdev->dev;
struct regmap *gcr_regmap;
u32 ipsrst1_bits = 0;
u32 ipsrst2_bits = NPCM_IPSRST2_USB_HOST;
u32 ipsrst3_bits = 0;
const char *gcr_dt;
gcr_dt = (const char *)
of_match_device(dev->driver->of_match_table, dev)->data;
gcr_regmap = syscon_regmap_lookup_by_compatible(gcr_dt);
if (IS_ERR(gcr_regmap)) {
dev_err(&pdev->dev, "Failed to find %s\n", gcr_dt);
return PTR_ERR(gcr_regmap);
}
/* checking which USB device is enabled */
regmap_read(gcr_regmap, NPCM_MDLR_OFFSET, &mdlr);
if (!(mdlr & NPCM_MDLR_USBD0))
ipsrst3_bits |= NPCM_IPSRST3_USBD0;
if (!(mdlr & NPCM_MDLR_USBD1))
ipsrst1_bits |= NPCM_IPSRST1_USBD1;
if (!(mdlr & NPCM_MDLR_USBD2_4))
ipsrst1_bits |= (NPCM_IPSRST1_USBD2 |
NPCM_IPSRST1_USBD3 |
NPCM_IPSRST1_USBD4);
if (!(mdlr & NPCM_MDLR_USBD0)) {
ipsrst1_bits |= (NPCM_IPSRST1_USBD5 |
NPCM_IPSRST1_USBD6);
ipsrst3_bits |= (NPCM_IPSRST3_USBD7 |
NPCM_IPSRST3_USBD8 |
NPCM_IPSRST3_USBD9);
}
/* assert reset USB PHY and USB devices */
iprst1 = readl(rc->base + NPCM_IPSRST1);
iprst2 = readl(rc->base + NPCM_IPSRST2);
iprst3 = readl(rc->base + NPCM_IPSRST3);
iprst1 |= ipsrst1_bits;
iprst2 |= ipsrst2_bits;
iprst3 |= (ipsrst3_bits | NPCM_IPSRST3_USBPHY1 |
NPCM_IPSRST3_USBPHY2);
writel(iprst1, rc->base + NPCM_IPSRST1);
writel(iprst2, rc->base + NPCM_IPSRST2);
writel(iprst3, rc->base + NPCM_IPSRST3);
/* clear USB PHY RS bit */
regmap_update_bits(gcr_regmap, NPCM_USB1PHYCTL_OFFSET,
NPCM_USBXPHYCTL_RS, 0);
regmap_update_bits(gcr_regmap, NPCM_USB2PHYCTL_OFFSET,
NPCM_USBXPHYCTL_RS, 0);
/* deassert reset USB PHY */
iprst3 &= ~(NPCM_IPSRST3_USBPHY1 | NPCM_IPSRST3_USBPHY2);
writel(iprst3, rc->base + NPCM_IPSRST3);
udelay(50);
/* set USB PHY RS bit */
regmap_update_bits(gcr_regmap, NPCM_USB1PHYCTL_OFFSET,
NPCM_USBXPHYCTL_RS, NPCM_USBXPHYCTL_RS);
regmap_update_bits(gcr_regmap, NPCM_USB2PHYCTL_OFFSET,
NPCM_USBXPHYCTL_RS, NPCM_USBXPHYCTL_RS);
/* deassert reset USB devices*/
iprst1 &= ~ipsrst1_bits;
iprst2 &= ~ipsrst2_bits;
iprst3 &= ~ipsrst3_bits;
writel(iprst1, rc->base + NPCM_IPSRST1);
writel(iprst2, rc->base + NPCM_IPSRST2);
writel(iprst3, rc->base + NPCM_IPSRST3);
return 0;
}
static const struct reset_control_ops npcm_rc_ops = {
.assert = npcm_rc_assert,
.deassert = npcm_rc_deassert,
.status = npcm_rc_status,
};
static int npcm_rc_probe(struct platform_device *pdev)
{
struct npcm_rc_data *rc;
int ret;
rc = devm_kzalloc(&pdev->dev, sizeof(*rc), GFP_KERNEL);
if (!rc)
return -ENOMEM;
rc->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(rc->base))
return PTR_ERR(rc->base);
spin_lock_init(&rc->lock);
rc->rcdev.owner = THIS_MODULE;
rc->rcdev.ops = &npcm_rc_ops;
rc->rcdev.of_node = pdev->dev.of_node;
rc->rcdev.of_reset_n_cells = 2;
rc->rcdev.of_xlate = npcm_reset_xlate;
platform_set_drvdata(pdev, rc);
ret = devm_reset_controller_register(&pdev->dev, &rc->rcdev);
if (ret) {
dev_err(&pdev->dev, "unable to register device\n");
return ret;
}
if (npcm_usb_reset(pdev, rc))
dev_warn(&pdev->dev, "NPCM USB reset failed, can cause issues with UDC and USB host\n");
if (!of_property_read_u32(pdev->dev.of_node, "nuvoton,sw-reset-number",
&rc->sw_reset_number)) {
if (rc->sw_reset_number && rc->sw_reset_number < 5) {
rc->restart_nb.priority = 192,
rc->restart_nb.notifier_call = npcm_rc_restart,
ret = register_restart_handler(&rc->restart_nb);
if (ret)
dev_warn(&pdev->dev, "failed to register restart handler\n");
}
}
return ret;
}
static struct platform_driver npcm_rc_driver = {
.probe = npcm_rc_probe,
.driver = {
.name = "npcm-reset",
.of_match_table = npcm_rc_match,
.suppress_bind_attrs = true,
},
};
builtin_platform_driver(npcm_rc_driver);
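
With of_reset_n_cells = 2, consumers name a line by a two-cell specifier that npcm_reset_xlate decodes (presumably a register offset plus bit position). A minimal consumer sketch using the generic reset API; the probe function and device are hypothetical, only the lookup/assert/deassert calls are standard:

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

static int example_probe(struct platform_device *pdev)
{
	struct reset_control *rst;

	/* Looked up via the consumer's "resets" DT property; the
	 * two-cell specifier is decoded by npcm_reset_xlate above. */
	rst = devm_reset_control_get_exclusive(&pdev->dev, NULL);
	if (IS_ERR(rst))
		return PTR_ERR(rst);

	reset_control_assert(rst);
	usleep_range(50, 100);
	return reset_control_deassert(rst);
}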


@ -118,6 +118,7 @@ static const struct of_device_id qcom_aoss_reset_of_match[] = {
{ .compatible = "qcom,sdm845-aoss-cc", .data = &sdm845_aoss_desc },
{}
};
MODULE_DEVICE_TABLE(of, qcom_aoss_reset_of_match);
static struct platform_driver qcom_aoss_reset_driver = {
.probe = qcom_aoss_reset_probe,
@ -127,7 +128,7 @@ static struct platform_driver qcom_aoss_reset_driver = {
},
};
builtin_platform_driver(qcom_aoss_reset_driver);
module_platform_driver(qcom_aoss_reset_driver);
MODULE_DESCRIPTION("Qualcomm AOSS Reset Driver");
MODULE_LICENSE("GPL v2");
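
The switch from builtin_platform_driver() to module_platform_driver(), together with the new MODULE_DEVICE_TABLE entry, is what lets this driver be built and autoloaded as a module. Roughly what the helper expands to (simplified sketch of the module_driver() machinery):

static int __init qcom_aoss_reset_driver_init(void)
{
	return platform_driver_register(&qcom_aoss_reset_driver);
}
module_init(qcom_aoss_reset_driver_init);

static void __exit qcom_aoss_reset_driver_exit(void)
{
	platform_driver_unregister(&qcom_aoss_reset_driver);
}
module_exit(qcom_aoss_reset_driver_exit);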


@ -108,7 +108,7 @@ static int scmi_reset_probe(struct scmi_device *sdev)
}
static const struct scmi_device_id scmi_id_table[] = {
{ SCMI_PROTOCOL_RESET },
{ SCMI_PROTOCOL_RESET, "reset" },
{ },
};
MODULE_DEVICE_TABLE(scmi, scmi_id_table);


@ -193,8 +193,8 @@ static const struct uniphier_reset_data uniphier_pro5_sd_reset_data[] = {
#define UNIPHIER_PERI_RESET_FI2C(id, ch) \
UNIPHIER_RESETX((id), 0x114, 24 + (ch))
#define UNIPHIER_PERI_RESET_SCSSI(id) \
UNIPHIER_RESETX((id), 0x110, 17)
#define UNIPHIER_PERI_RESET_SCSSI(id, ch) \
UNIPHIER_RESETX((id), 0x110, 17 + (ch))
#define UNIPHIER_PERI_RESET_MCSSI(id) \
UNIPHIER_RESETX((id), 0x114, 14)
@ -209,7 +209,7 @@ static const struct uniphier_reset_data uniphier_ld4_peri_reset_data[] = {
UNIPHIER_PERI_RESET_I2C(6, 2),
UNIPHIER_PERI_RESET_I2C(7, 3),
UNIPHIER_PERI_RESET_I2C(8, 4),
UNIPHIER_PERI_RESET_SCSSI(11),
UNIPHIER_PERI_RESET_SCSSI(11, 0),
UNIPHIER_RESET_END,
};
@ -225,8 +225,11 @@ static const struct uniphier_reset_data uniphier_pro4_peri_reset_data[] = {
UNIPHIER_PERI_RESET_FI2C(8, 4),
UNIPHIER_PERI_RESET_FI2C(9, 5),
UNIPHIER_PERI_RESET_FI2C(10, 6),
UNIPHIER_PERI_RESET_SCSSI(11),
UNIPHIER_PERI_RESET_MCSSI(12),
UNIPHIER_PERI_RESET_SCSSI(11, 0),
UNIPHIER_PERI_RESET_SCSSI(12, 1),
UNIPHIER_PERI_RESET_SCSSI(13, 2),
UNIPHIER_PERI_RESET_SCSSI(14, 3),
UNIPHIER_PERI_RESET_MCSSI(15),
UNIPHIER_RESET_END,
};
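
For illustration, the new per-channel form expands as follows (expansion follows directly from the macro definition above):

/* UNIPHIER_PERI_RESET_SCSSI(13, 2)
 *   => UNIPHIER_RESETX(13, 0x110, 17 + 2)
 * i.e. reset ID 13 controls (active-low) bit 19 of register 0x110,
 * so each SCSSI channel on Pro4 now gets its own reset ID.
 */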


@ -63,7 +63,7 @@ static const int b15_cpubiuctrl_regs[] = {
[CPU_WRITEBACK_CTRL_REG] = -1,
};
/* Odd cases, e.g: 7260 */
/* Odd cases, e.g: 7260A0 */
static const int b53_cpubiuctrl_no_wb_regs[] = {
[CPU_CREDIT_REG] = 0x0b0,
[CPU_MCP_FLOW_REG] = 0x0b4,
@ -76,6 +76,12 @@ static const int b53_cpubiuctrl_regs[] = {
[CPU_WRITEBACK_CTRL_REG] = 0x22c,
};
static const int a72_cpubiuctrl_regs[] = {
[CPU_CREDIT_REG] = 0x18,
[CPU_MCP_FLOW_REG] = 0x1c,
[CPU_WRITEBACK_CTRL_REG] = 0x20,
};
#define NUM_CPU_BIUCTRL_REGS 3
static int __init mcp_write_pairing_set(void)
@ -101,25 +107,29 @@ static int __init mcp_write_pairing_set(void)
return 0;
}
static const u32 b53_mach_compat[] = {
static const u32 a72_b53_mach_compat[] = {
0x7211,
0x7216,
0x7255,
0x7260,
0x7268,
0x7271,
0x7278,
};
static void __init mcp_b53_set(void)
static void __init mcp_a72_b53_set(void)
{
unsigned int i;
u32 reg;
reg = brcmstb_get_family_id();
for (i = 0; i < ARRAY_SIZE(b53_mach_compat); i++) {
if (BRCM_ID(reg) == b53_mach_compat[i])
for (i = 0; i < ARRAY_SIZE(a72_b53_mach_compat); i++) {
if (BRCM_ID(reg) == a72_b53_mach_compat[i])
break;
}
if (i == ARRAY_SIZE(b53_mach_compat))
if (i == ARRAY_SIZE(a72_b53_mach_compat))
return;
/* Set all 3 MCP interfaces to 8 credits */
@ -157,6 +167,7 @@ static void __init mcp_b53_set(void)
static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
{
struct device_node *cpu_dn;
u32 family_id;
int ret = 0;
cpubiuctrl_base = of_iomap(np, 0);
@ -179,13 +190,16 @@ static int __init setup_hifcpubiuctrl_regs(struct device_node *np)
cpubiuctrl_regs = b15_cpubiuctrl_regs;
else if (of_device_is_compatible(cpu_dn, "brcm,brahma-b53"))
cpubiuctrl_regs = b53_cpubiuctrl_regs;
else if (of_device_is_compatible(cpu_dn, "arm,cortex-a72"))
cpubiuctrl_regs = a72_cpubiuctrl_regs;
else {
pr_err("unsupported CPU\n");
ret = -EINVAL;
}
of_node_put(cpu_dn);
if (BRCM_ID(brcmstb_get_family_id()) == 0x7260)
family_id = brcmstb_get_family_id();
if (BRCM_ID(family_id) == 0x7260 && BRCM_REV(family_id) == 0)
cpubiuctrl_regs = b53_cpubiuctrl_no_wb_regs;
out:
of_node_put(np);
@ -248,7 +262,7 @@ static int __init brcmstb_biuctrl_init(void)
return ret;
}
mcp_b53_set();
mcp_a72_b53_set();
#ifdef CONFIG_PM_SLEEP
register_syscore_ops(&brcmstb_cpu_credit_syscore_ops);
#endif
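
The added BRCM_REV() check narrows the odd-case register layout from every 7260 to 7260 A0 only. A sketch of the decode this relies on; these macro bodies are an assumption, not part of this diff:

/* Assumed layout of the family-ID register: chip family in the
 * upper bits, stepping in the low byte, so rev 0 is stepping A0. */
#define BRCM_ID(reg)	((u32)(reg) >> 28 ? (u32)(reg) >> 16 : (u32)(reg) >> 8)
#define BRCM_REV(reg)	((u32)(reg) & 0xff)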


@ -5,7 +5,8 @@
config QUICC_ENGINE
bool "QUICC Engine (QE) framework support"
depends on FSL_SOC && PPC32
depends on OF && HAS_IOMEM
depends on PPC || ARM || ARM64 || COMPILE_TEST
select GENERIC_ALLOCATOR
select CRC32
help


@ -41,13 +41,13 @@ static void qe_gpio_save_regs(struct of_mm_gpio_chip *mm_gc)
container_of(mm_gc, struct qe_gpio_chip, mm_gc);
struct qe_pio_regs __iomem *regs = mm_gc->regs;
qe_gc->cpdata = in_be32(&regs->cpdata);
qe_gc->cpdata = qe_ioread32be(&regs->cpdata);
qe_gc->saved_regs.cpdata = qe_gc->cpdata;
qe_gc->saved_regs.cpdir1 = in_be32(&regs->cpdir1);
qe_gc->saved_regs.cpdir2 = in_be32(&regs->cpdir2);
qe_gc->saved_regs.cppar1 = in_be32(&regs->cppar1);
qe_gc->saved_regs.cppar2 = in_be32(&regs->cppar2);
qe_gc->saved_regs.cpodr = in_be32(&regs->cpodr);
qe_gc->saved_regs.cpdir1 = qe_ioread32be(&regs->cpdir1);
qe_gc->saved_regs.cpdir2 = qe_ioread32be(&regs->cpdir2);
qe_gc->saved_regs.cppar1 = qe_ioread32be(&regs->cppar1);
qe_gc->saved_regs.cppar2 = qe_ioread32be(&regs->cppar2);
qe_gc->saved_regs.cpodr = qe_ioread32be(&regs->cpodr);
}
static int qe_gpio_get(struct gpio_chip *gc, unsigned int gpio)
@ -56,7 +56,7 @@ static int qe_gpio_get(struct gpio_chip *gc, unsigned int gpio)
struct qe_pio_regs __iomem *regs = mm_gc->regs;
u32 pin_mask = 1 << (QE_PIO_PINS - 1 - gpio);
return !!(in_be32(&regs->cpdata) & pin_mask);
return !!(qe_ioread32be(&regs->cpdata) & pin_mask);
}
static void qe_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
@ -74,7 +74,7 @@ static void qe_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
else
qe_gc->cpdata &= ~pin_mask;
out_be32(&regs->cpdata, qe_gc->cpdata);
qe_iowrite32be(qe_gc->cpdata, &regs->cpdata);
spin_unlock_irqrestore(&qe_gc->lock, flags);
}
@ -101,7 +101,7 @@ static void qe_gpio_set_multiple(struct gpio_chip *gc,
}
}
out_be32(&regs->cpdata, qe_gc->cpdata);
qe_iowrite32be(qe_gc->cpdata, &regs->cpdata);
spin_unlock_irqrestore(&qe_gc->lock, flags);
}
@ -160,7 +160,6 @@ struct qe_pin *qe_pin_request(struct device_node *np, int index)
{
struct qe_pin *qe_pin;
struct gpio_chip *gc;
struct of_mm_gpio_chip *mm_gc;
struct qe_gpio_chip *qe_gc;
int err;
unsigned long flags;
@ -186,7 +185,6 @@ struct qe_pin *qe_pin_request(struct device_node *np, int index)
goto err0;
}
mm_gc = to_of_mm_gpio_chip(gc);
qe_gc = gpiochip_get_data(gc);
spin_lock_irqsave(&qe_gc->lock, flags);
@ -255,11 +253,15 @@ void qe_pin_set_dedicated(struct qe_pin *qe_pin)
spin_lock_irqsave(&qe_gc->lock, flags);
if (second_reg) {
clrsetbits_be32(&regs->cpdir2, mask2, sregs->cpdir2 & mask2);
clrsetbits_be32(&regs->cppar2, mask2, sregs->cppar2 & mask2);
qe_clrsetbits_be32(&regs->cpdir2, mask2,
sregs->cpdir2 & mask2);
qe_clrsetbits_be32(&regs->cppar2, mask2,
sregs->cppar2 & mask2);
} else {
clrsetbits_be32(&regs->cpdir1, mask2, sregs->cpdir1 & mask2);
clrsetbits_be32(&regs->cppar1, mask2, sregs->cppar1 & mask2);
qe_clrsetbits_be32(&regs->cpdir1, mask2,
sregs->cpdir1 & mask2);
qe_clrsetbits_be32(&regs->cppar1, mask2,
sregs->cppar1 & mask2);
}
if (sregs->cpdata & mask1)
@ -267,8 +269,8 @@ void qe_pin_set_dedicated(struct qe_pin *qe_pin)
else
qe_gc->cpdata &= ~mask1;
out_be32(&regs->cpdata, qe_gc->cpdata);
clrsetbits_be32(&regs->cpodr, mask1, sregs->cpodr & mask1);
qe_iowrite32be(qe_gc->cpdata, &regs->cpdata);
qe_clrsetbits_be32(&regs->cpodr, mask1, sregs->cpodr & mask1);
spin_unlock_irqrestore(&qe_gc->lock, flags);
}
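
The in_be32()/out_be32()/clrsetbits_be32() calls replaced throughout this series are PowerPC-only accessors; the qe_*() wrappers are what allow the same drivers to build on ARM and ARM64. A plausible minimal definition, assuming they sit directly on the generic big-endian ioread/iowrite helpers:

#include <linux/io.h>

static inline u32 qe_ioread32be(const void __iomem *addr)
{
	return ioread32be(addr);
}

static inline void qe_iowrite32be(u32 val, void __iomem *addr)
{
	iowrite32be(val, addr);
}

static inline void qe_clrsetbits_be32(void __iomem *addr, u32 clr, u32 set)
{
	qe_iowrite32be((qe_ioread32be(addr) & ~clr) | set, addr);
}

The 16-bit and 8-bit variants used elsewhere in this series (qe_setbits_be16, qe_clrbits_be32, qe_clrsetbits_8, ...) presumably follow the same pattern.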


@ -22,16 +22,12 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/ioport.h>
#include <linux/iopoll.h>
#include <linux/crc32.h>
#include <linux/mod_devicetable.h>
#include <linux/of_platform.h>
#include <asm/irq.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <soc/fsl/qe/immap_qe.h>
#include <soc/fsl/qe/qe.h>
#include <asm/prom.h>
#include <asm/rheap.h>
static void qe_snums_init(void);
static int qe_sdma_init(void);
@ -108,11 +104,12 @@ int qe_issue_cmd(u32 cmd, u32 device, u8 mcn_protocol, u32 cmd_input)
{
unsigned long flags;
u8 mcn_shift = 0, dev_shift = 0;
u32 ret;
u32 val;
int ret;
spin_lock_irqsave(&qe_lock, flags);
if (cmd == QE_RESET) {
out_be32(&qe_immr->cp.cecr, (u32) (cmd | QE_CR_FLG));
qe_iowrite32be((u32)(cmd | QE_CR_FLG), &qe_immr->cp.cecr);
} else {
if (cmd == QE_ASSIGN_PAGE) {
/* Here device is the SNUM, not sub-block */
@ -129,20 +126,18 @@ int qe_issue_cmd(u32 cmd, u32 device, u8 mcn_protocol, u32 cmd_input)
mcn_shift = QE_CR_MCN_NORMAL_SHIFT;
}
out_be32(&qe_immr->cp.cecdr, cmd_input);
out_be32(&qe_immr->cp.cecr,
(cmd | QE_CR_FLG | ((u32) device << dev_shift) | (u32)
mcn_protocol << mcn_shift));
qe_iowrite32be(cmd_input, &qe_immr->cp.cecdr);
qe_iowrite32be((cmd | QE_CR_FLG | ((u32)device << dev_shift) | (u32)mcn_protocol << mcn_shift),
&qe_immr->cp.cecr);
}
/* wait for the QE_CR_FLG to clear */
ret = spin_event_timeout((in_be32(&qe_immr->cp.cecr) & QE_CR_FLG) == 0,
100, 0);
/* On timeout (e.g. failure), the expression will be false (ret == 0),
otherwise it will be true (ret == 1). */
ret = readx_poll_timeout_atomic(qe_ioread32be, &qe_immr->cp.cecr, val,
(val & QE_CR_FLG) == 0, 0, 100);
/* On timeout, ret is -ETIMEDOUT, otherwise it will be 0. */
spin_unlock_irqrestore(&qe_lock, flags);
return ret == 1;
return ret == 0;
}
EXPORT_SYMBOL(qe_issue_cmd);
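
Note that the polling-helper change also flips the success convention: spin_event_timeout() returned 1 on success, while readx_poll_timeout_atomic() returns 0, hence the new "ret == 0" test. The pattern in isolation (a sketch; struct and field names as used above):

#include <linux/iopoll.h>
#include <soc/fsl/qe/immap_qe.h>
#include <soc/fsl/qe/qe.h>

static int cecr_wait_idle(struct qe_immap __iomem *immr)
{
	u32 val;

	/* Busy-poll qe_ioread32be(&immr->cp.cecr) into val until
	 * QE_CR_FLG clears or 100 us elapse; 0 or -ETIMEDOUT. */
	return readx_poll_timeout_atomic(qe_ioread32be, &immr->cp.cecr, val,
					 (val & QE_CR_FLG) == 0, 0, 100);
}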
@ -164,8 +159,7 @@ static unsigned int brg_clk = 0;
unsigned int qe_get_brg_clk(void)
{
struct device_node *qe;
int size;
const u32 *prop;
u32 brg;
unsigned int mod;
if (brg_clk)
@ -175,9 +169,8 @@ unsigned int qe_get_brg_clk(void)
if (!qe)
return brg_clk;
prop = of_get_property(qe, "brg-frequency", &size);
if (prop && size == sizeof(*prop))
brg_clk = *prop;
if (!of_property_read_u32(qe, "brg-frequency", &brg))
brg_clk = brg;
of_node_put(qe);
@ -197,6 +190,14 @@ EXPORT_SYMBOL(qe_get_brg_clk);
#define PVR_VER_836x 0x8083
#define PVR_VER_832x 0x8084
static bool qe_general4_errata(void)
{
#ifdef CONFIG_PPC32
return pvr_version_is(PVR_VER_836x) || pvr_version_is(PVR_VER_832x);
#endif
return false;
}
/* Program the BRG to the given sampling rate and multiplier
*
* @brg: the BRG, QE_BRG1 - QE_BRG16
@ -223,14 +224,14 @@ int qe_setbrg(enum qe_clock brg, unsigned int rate, unsigned int multiplier)
/* Errata QE_General4, which affects some MPC832x and MPC836x SOCs, says
that the BRG divisor must be even if you're not using divide-by-16
mode. */
if (pvr_version_is(PVR_VER_836x) || pvr_version_is(PVR_VER_832x))
if (qe_general4_errata())
if (!div16 && (divisor & 1) && (divisor > 3))
divisor++;
tempval = ((divisor - 1) << QE_BRGC_DIVISOR_SHIFT) |
QE_BRGC_ENABLE | div16;
out_be32(&qe_immr->brg.brgc[brg - QE_BRG1], tempval);
qe_iowrite32be(tempval, &qe_immr->brg.brgc[brg - QE_BRG1]);
return 0;
}
@ -364,22 +365,20 @@ EXPORT_SYMBOL(qe_put_snum);
static int qe_sdma_init(void)
{
struct sdma __iomem *sdma = &qe_immr->sdma;
static unsigned long sdma_buf_offset = (unsigned long)-ENOMEM;
if (!sdma)
return -ENODEV;
static s32 sdma_buf_offset = -ENOMEM;
/* allocate 2 internal temporary buffers (512 bytes each) for
 * the SDMA */
if (IS_ERR_VALUE(sdma_buf_offset)) {
if (sdma_buf_offset < 0) {
sdma_buf_offset = qe_muram_alloc(512 * 2, 4096);
if (IS_ERR_VALUE(sdma_buf_offset))
if (sdma_buf_offset < 0)
return -ENOMEM;
}
out_be32(&sdma->sdebcr, (u32) sdma_buf_offset & QE_SDEBCR_BA_MASK);
out_be32(&sdma->sdmr, (QE_SDMR_GLB_1_MSK |
(0x1 << QE_SDMR_CEN_SHIFT)));
qe_iowrite32be((u32)sdma_buf_offset & QE_SDEBCR_BA_MASK,
&sdma->sdebcr);
qe_iowrite32be((QE_SDMR_GLB_1_MSK | (0x1 << QE_SDMR_CEN_SHIFT)),
&sdma->sdmr);
return 0;
}
@ -417,14 +416,14 @@ static void qe_upload_microcode(const void *base,
"uploading microcode '%s'\n", ucode->id);
/* Use auto-increment */
out_be32(&qe_immr->iram.iadd, be32_to_cpu(ucode->iram_offset) |
QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR);
qe_iowrite32be(be32_to_cpu(ucode->iram_offset) | QE_IRAM_IADD_AIE | QE_IRAM_IADD_BADDR,
&qe_immr->iram.iadd);
for (i = 0; i < be32_to_cpu(ucode->count); i++)
out_be32(&qe_immr->iram.idata, be32_to_cpu(code[i]));
qe_iowrite32be(be32_to_cpu(code[i]), &qe_immr->iram.idata);
/* Set I-RAM Ready Register */
out_be32(&qe_immr->iram.iready, be32_to_cpu(QE_IRAM_READY));
qe_iowrite32be(be32_to_cpu(QE_IRAM_READY), &qe_immr->iram.iready);
}
/*
@ -509,7 +508,7 @@ int qe_upload_firmware(const struct qe_firmware *firmware)
* If the microcode calls for it, split the I-RAM.
*/
if (!firmware->split)
setbits16(&qe_immr->cp.cercr, QE_CP_CERCR_CIR);
qe_setbits_be16(&qe_immr->cp.cercr, QE_CP_CERCR_CIR);
if (firmware->soc.model)
printk(KERN_INFO
@ -543,11 +542,13 @@ int qe_upload_firmware(const struct qe_firmware *firmware)
u32 trap = be32_to_cpu(ucode->traps[j]);
if (trap)
out_be32(&qe_immr->rsp[i].tibcr[j], trap);
qe_iowrite32be(trap,
&qe_immr->rsp[i].tibcr[j]);
}
/* Enable traps */
out_be32(&qe_immr->rsp[i].eccr, be32_to_cpu(ucode->eccr));
qe_iowrite32be(be32_to_cpu(ucode->eccr),
&qe_immr->rsp[i].eccr);
}
qe_firmware_uploaded = 1;
@ -565,11 +566,9 @@ EXPORT_SYMBOL(qe_upload_firmware);
struct qe_firmware_info *qe_get_firmware_info(void)
{
static int initialized;
struct property *prop;
struct device_node *qe;
struct device_node *fw = NULL;
const char *sprop;
unsigned int i;
/*
* If we haven't checked yet, and a driver hasn't uploaded a firmware
@ -603,20 +602,11 @@ struct qe_firmware_info *qe_get_firmware_info(void)
strlcpy(qe_firmware_info.id, sprop,
sizeof(qe_firmware_info.id));
prop = of_find_property(fw, "extended-modes", NULL);
if (prop && (prop->length == sizeof(u64))) {
const u64 *iprop = prop->value;
of_property_read_u64(fw, "extended-modes",
&qe_firmware_info.extended_modes);
qe_firmware_info.extended_modes = *iprop;
}
prop = of_find_property(fw, "virtual-traps", NULL);
if (prop && (prop->length == 32)) {
const u32 *iprop = prop->value;
for (i = 0; i < ARRAY_SIZE(qe_firmware_info.vtraps); i++)
qe_firmware_info.vtraps[i] = iprop[i];
}
of_property_read_u32_array(fw, "virtual-traps", qe_firmware_info.vtraps,
ARRAY_SIZE(qe_firmware_info.vtraps));
of_node_put(fw);
@ -627,17 +617,13 @@ EXPORT_SYMBOL(qe_get_firmware_info);
unsigned int qe_get_num_of_risc(void)
{
struct device_node *qe;
int size;
unsigned int num_of_risc = 0;
const u32 *prop;
qe = qe_get_device_node();
if (!qe)
return num_of_risc;
prop = of_get_property(qe, "fsl,qe-num-riscs", &size);
if (prop && size == sizeof(*prop))
num_of_risc = *prop;
of_property_read_u32(qe, "fsl,qe-num-riscs", &num_of_risc);
of_node_put(qe);


@ -32,7 +32,7 @@ static phys_addr_t muram_pbase;
struct muram_block {
struct list_head head;
unsigned long start;
s32 start;
int size;
};
@ -110,34 +110,30 @@ out_muram:
* @algo: algorithm for alloc.
* @data: data for genalloc's algorithm.
*
* This function returns an offset into the muram area.
* This function returns a non-negative offset into the muram area, or
* a negative errno on failure.
*/
static unsigned long cpm_muram_alloc_common(unsigned long size,
genpool_algo_t algo, void *data)
static s32 cpm_muram_alloc_common(unsigned long size,
genpool_algo_t algo, void *data)
{
struct muram_block *entry;
unsigned long start;
s32 start;
if (!muram_pool && cpm_muram_init())
goto out2;
start = gen_pool_alloc_algo(muram_pool, size, algo, data);
if (!start)
goto out2;
start = start - GENPOOL_OFFSET;
memset_io(cpm_muram_addr(start), 0, size);
entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry)
goto out1;
return -ENOMEM;
start = gen_pool_alloc_algo(muram_pool, size, algo, data);
if (!start) {
kfree(entry);
return -ENOMEM;
}
start = start - GENPOOL_OFFSET;
memset_io(cpm_muram_addr(start), 0, size);
entry->start = start;
entry->size = size;
list_add(&entry->head, &muram_block_list);
return start;
out1:
gen_pool_free(muram_pool, start, size);
out2:
return (unsigned long)-ENOMEM;
}
/*
@ -145,13 +141,14 @@ out2:
* @size: number of bytes to allocate
* @align: requested alignment, in bytes
*
* This function returns an offset into the muram area.
* This function returns a non-negative offset into the muram area, or
* a negative errno on failure.
* Use cpm_dpram_addr() to get the virtual address of the area.
* Use cpm_muram_free() to free the allocation.
*/
unsigned long cpm_muram_alloc(unsigned long size, unsigned long align)
s32 cpm_muram_alloc(unsigned long size, unsigned long align)
{
unsigned long start;
s32 start;
unsigned long flags;
struct genpool_data_align muram_pool_data;
@ -168,12 +165,15 @@ EXPORT_SYMBOL(cpm_muram_alloc);
* cpm_muram_free - free a chunk of multi-user ram
* @offset: The beginning of the chunk as returned by cpm_muram_alloc().
*/
int cpm_muram_free(unsigned long offset)
void cpm_muram_free(s32 offset)
{
unsigned long flags;
int size;
struct muram_block *tmp;
if (offset < 0)
return;
size = 0;
spin_lock_irqsave(&cpm_muram_lock, flags);
list_for_each_entry(tmp, &muram_block_list, head) {
@ -186,7 +186,6 @@ int cpm_muram_free(unsigned long offset)
}
gen_pool_free(muram_pool, offset + GENPOOL_OFFSET, size);
spin_unlock_irqrestore(&cpm_muram_lock, flags);
return size;
}
EXPORT_SYMBOL(cpm_muram_free);
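
With offsets now plain s32 values, callers lose the IS_ERR_VALUE() gymnastics on unsigned long. A usage sketch (the function name is hypothetical; the muram calls are the ones declared in this file):

#include <linux/io.h>
#include <soc/fsl/qe/qe.h>

static int example_muram_user(void)
{
	s32 off = cpm_muram_alloc(64, 8);

	if (off < 0)
		return off;	/* negative errno straight through */

	memset_io(cpm_muram_addr(off), 0, 64);
	/* ... use the region ... */
	cpm_muram_free(off);	/* now also safe on a failed allocation */
	return 0;
}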
@ -194,13 +193,14 @@ EXPORT_SYMBOL(cpm_muram_free);
* cpm_muram_alloc_fixed - reserve a specific region of multi-user ram
* @offset: offset of allocation start address
* @size: number of bytes to allocate
* This function returns an offset into the muram area
* This function returns @offset if the area was available, a negative
* errno otherwise.
* Use cpm_dpram_addr() to get the virtual address of the area.
* Use cpm_muram_free() to free the allocation.
*/
unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size)
s32 cpm_muram_alloc_fixed(unsigned long offset, unsigned long size)
{
unsigned long start;
s32 start;
unsigned long flags;
struct genpool_data_fixed muram_pool_data_fixed;


@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/irq.h>
#include <linux/reboot.h>
#include <linux/slab.h>
#include <linux/stddef.h>
@ -24,9 +25,57 @@
#include <linux/spinlock.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <soc/fsl/qe/qe_ic.h>
#include <soc/fsl/qe/qe.h>
#include "qe_ic.h"
#define NR_QE_IC_INTS 64
/* QE IC registers offset */
#define QEIC_CICR 0x00
#define QEIC_CIVEC 0x04
#define QEIC_CIPXCC 0x10
#define QEIC_CIPYCC 0x14
#define QEIC_CIPWCC 0x18
#define QEIC_CIPZCC 0x1c
#define QEIC_CIMR 0x20
#define QEIC_CRIMR 0x24
#define QEIC_CIPRTA 0x30
#define QEIC_CIPRTB 0x34
#define QEIC_CHIVEC 0x60
struct qe_ic {
/* Control registers offset */
u32 __iomem *regs;
/* The remapper for this QEIC */
struct irq_domain *irqhost;
/* The "linux" controller struct */
struct irq_chip hc_irq;
/* VIRQ numbers of QE high/low irqs */
unsigned int virq_high;
unsigned int virq_low;
};
/*
* QE interrupt controller internal structure
*/
struct qe_ic_info {
/* Location of this source at the QIMR register */
u32 mask;
/* Mask register offset */
u32 mask_reg;
/*
 * For grouped interrupt sources: the interrupt code as it
 * appears in the group priority register
 */
u8 pri_code;
/* Group priority register offset */
u32 pri_reg;
};
static DEFINE_RAW_SPINLOCK(qe_ic_lock);
@ -171,15 +220,15 @@ static struct qe_ic_info qe_ic_info[] = {
},
};
static inline u32 qe_ic_read(volatile __be32 __iomem * base, unsigned int reg)
static inline u32 qe_ic_read(__be32 __iomem *base, unsigned int reg)
{
return in_be32(base + (reg >> 2));
return qe_ioread32be(base + (reg >> 2));
}
static inline void qe_ic_write(volatile __be32 __iomem * base, unsigned int reg,
static inline void qe_ic_write(__be32 __iomem *base, unsigned int reg,
u32 value)
{
out_be32(base + (reg >> 2), value);
qe_iowrite32be(value, base + (reg >> 2));
}
static inline struct qe_ic *qe_ic_from_irq(unsigned int virq)
@ -281,8 +330,8 @@ static const struct irq_domain_ops qe_ic_host_ops = {
.xlate = irq_domain_xlate_onetwocell,
};
/* Return an interrupt vector or NO_IRQ if no interrupt is pending. */
unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic)
/* Return an interrupt vector or 0 if no interrupt is pending. */
static unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic)
{
int irq;
@ -292,13 +341,13 @@ unsigned int qe_ic_get_low_irq(struct qe_ic *qe_ic)
irq = qe_ic_read(qe_ic->regs, QEIC_CIVEC) >> 26;
if (irq == 0)
return NO_IRQ;
return 0;
return irq_linear_revmap(qe_ic->irqhost, irq);
}
/* Return an interrupt vector or NO_IRQ if no interrupt is pending. */
unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic)
/* Return an interrupt vector or 0 if no interrupt is pending. */
static unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic)
{
int irq;
@ -308,18 +357,60 @@ unsigned int qe_ic_get_high_irq(struct qe_ic *qe_ic)
irq = qe_ic_read(qe_ic->regs, QEIC_CHIVEC) >> 26;
if (irq == 0)
return NO_IRQ;
return 0;
return irq_linear_revmap(qe_ic->irqhost, irq);
}
void __init qe_ic_init(struct device_node *node, unsigned int flags,
void (*low_handler)(struct irq_desc *desc),
void (*high_handler)(struct irq_desc *desc))
static void qe_ic_cascade_low(struct irq_desc *desc)
{
struct qe_ic *qe_ic = irq_desc_get_handler_data(desc);
unsigned int cascade_irq = qe_ic_get_low_irq(qe_ic);
struct irq_chip *chip = irq_desc_get_chip(desc);
if (cascade_irq != 0)
generic_handle_irq(cascade_irq);
if (chip->irq_eoi)
chip->irq_eoi(&desc->irq_data);
}
static void qe_ic_cascade_high(struct irq_desc *desc)
{
struct qe_ic *qe_ic = irq_desc_get_handler_data(desc);
unsigned int cascade_irq = qe_ic_get_high_irq(qe_ic);
struct irq_chip *chip = irq_desc_get_chip(desc);
if (cascade_irq != 0)
generic_handle_irq(cascade_irq);
if (chip->irq_eoi)
chip->irq_eoi(&desc->irq_data);
}
static void qe_ic_cascade_muxed_mpic(struct irq_desc *desc)
{
struct qe_ic *qe_ic = irq_desc_get_handler_data(desc);
unsigned int cascade_irq;
struct irq_chip *chip = irq_desc_get_chip(desc);
cascade_irq = qe_ic_get_high_irq(qe_ic);
if (cascade_irq == 0)
cascade_irq = qe_ic_get_low_irq(qe_ic);
if (cascade_irq != 0)
generic_handle_irq(cascade_irq);
chip->irq_eoi(&desc->irq_data);
}
static void __init qe_ic_init(struct device_node *node)
{
void (*low_handler)(struct irq_desc *desc);
void (*high_handler)(struct irq_desc *desc);
struct qe_ic *qe_ic;
struct resource res;
u32 temp = 0, ret, high_active = 0;
u32 ret;
ret = of_address_to_resource(node, 0, &res);
if (ret)
@ -343,166 +434,42 @@ void __init qe_ic_init(struct device_node *node, unsigned int flags,
qe_ic->virq_high = irq_of_parse_and_map(node, 0);
qe_ic->virq_low = irq_of_parse_and_map(node, 1);
if (qe_ic->virq_low == NO_IRQ) {
if (!qe_ic->virq_low) {
printk(KERN_ERR "Failed to map QE_IC low IRQ\n");
kfree(qe_ic);
return;
}
/* default priority scheme is grouped. If spread mode is */
/* required, configure cicr accordingly. */
if (flags & QE_IC_SPREADMODE_GRP_W)
temp |= CICR_GWCC;
if (flags & QE_IC_SPREADMODE_GRP_X)
temp |= CICR_GXCC;
if (flags & QE_IC_SPREADMODE_GRP_Y)
temp |= CICR_GYCC;
if (flags & QE_IC_SPREADMODE_GRP_Z)
temp |= CICR_GZCC;
if (flags & QE_IC_SPREADMODE_GRP_RISCA)
temp |= CICR_GRTA;
if (flags & QE_IC_SPREADMODE_GRP_RISCB)
temp |= CICR_GRTB;
/* choose destination signal for highest priority interrupt */
if (flags & QE_IC_HIGH_SIGNAL) {
temp |= (SIGNAL_HIGH << CICR_HPIT_SHIFT);
high_active = 1;
if (qe_ic->virq_high != qe_ic->virq_low) {
low_handler = qe_ic_cascade_low;
high_handler = qe_ic_cascade_high;
} else {
low_handler = qe_ic_cascade_muxed_mpic;
high_handler = NULL;
}
qe_ic_write(qe_ic->regs, QEIC_CICR, temp);
qe_ic_write(qe_ic->regs, QEIC_CICR, 0);
irq_set_handler_data(qe_ic->virq_low, qe_ic);
irq_set_chained_handler(qe_ic->virq_low, low_handler);
if (qe_ic->virq_high != NO_IRQ &&
qe_ic->virq_high != qe_ic->virq_low) {
if (qe_ic->virq_high && qe_ic->virq_high != qe_ic->virq_low) {
irq_set_handler_data(qe_ic->virq_high, qe_ic);
irq_set_chained_handler(qe_ic->virq_high, high_handler);
}
}
void qe_ic_set_highest_priority(unsigned int virq, int high)
static int __init qe_ic_of_init(void)
{
struct qe_ic *qe_ic = qe_ic_from_irq(virq);
unsigned int src = virq_to_hw(virq);
u32 temp = 0;
struct device_node *np;
temp = qe_ic_read(qe_ic->regs, QEIC_CICR);
temp &= ~CICR_HP_MASK;
temp |= src << CICR_HP_SHIFT;
temp &= ~CICR_HPIT_MASK;
temp |= (high ? SIGNAL_HIGH : SIGNAL_LOW) << CICR_HPIT_SHIFT;
qe_ic_write(qe_ic->regs, QEIC_CICR, temp);
}
/* Set Priority level within its group, from 1 to 8 */
int qe_ic_set_priority(unsigned int virq, unsigned int priority)
{
struct qe_ic *qe_ic = qe_ic_from_irq(virq);
unsigned int src = virq_to_hw(virq);
u32 temp;
if (priority > 8 || priority == 0)
return -EINVAL;
if (WARN_ONCE(src >= ARRAY_SIZE(qe_ic_info),
"%s: Invalid hw irq number for QEIC\n", __func__))
return -EINVAL;
if (qe_ic_info[src].pri_reg == 0)
return -EINVAL;
temp = qe_ic_read(qe_ic->regs, qe_ic_info[src].pri_reg);
if (priority < 4) {
temp &= ~(0x7 << (32 - priority * 3));
temp |= qe_ic_info[src].pri_code << (32 - priority * 3);
} else {
temp &= ~(0x7 << (24 - priority * 3));
temp |= qe_ic_info[src].pri_code << (24 - priority * 3);
np = of_find_compatible_node(NULL, NULL, "fsl,qe-ic");
if (!np) {
np = of_find_node_by_type(NULL, "qeic");
if (!np)
return -ENODEV;
}
qe_ic_write(qe_ic->regs, qe_ic_info[src].pri_reg, temp);
qe_ic_init(np);
of_node_put(np);
return 0;
}
/* Set a QE priority to use high irq, only priority 1~2 can use high irq */
int qe_ic_set_high_priority(unsigned int virq, unsigned int priority, int high)
{
struct qe_ic *qe_ic = qe_ic_from_irq(virq);
unsigned int src = virq_to_hw(virq);
u32 temp, control_reg = QEIC_CICNR, shift = 0;
if (priority > 2 || priority == 0)
return -EINVAL;
if (WARN_ONCE(src >= ARRAY_SIZE(qe_ic_info),
"%s: Invalid hw irq number for QEIC\n", __func__))
return -EINVAL;
switch (qe_ic_info[src].pri_reg) {
case QEIC_CIPZCC:
shift = CICNR_ZCC1T_SHIFT;
break;
case QEIC_CIPWCC:
shift = CICNR_WCC1T_SHIFT;
break;
case QEIC_CIPYCC:
shift = CICNR_YCC1T_SHIFT;
break;
case QEIC_CIPXCC:
shift = CICNR_XCC1T_SHIFT;
break;
case QEIC_CIPRTA:
shift = CRICR_RTA1T_SHIFT;
control_reg = QEIC_CRICR;
break;
case QEIC_CIPRTB:
shift = CRICR_RTB1T_SHIFT;
control_reg = QEIC_CRICR;
break;
default:
return -EINVAL;
}
shift += (2 - priority) * 2;
temp = qe_ic_read(qe_ic->regs, control_reg);
temp &= ~(SIGNAL_MASK << shift);
temp |= (high ? SIGNAL_HIGH : SIGNAL_LOW) << shift;
qe_ic_write(qe_ic->regs, control_reg, temp);
return 0;
}
static struct bus_type qe_ic_subsys = {
.name = "qe_ic",
.dev_name = "qe_ic",
};
static struct device device_qe_ic = {
.id = 0,
.bus = &qe_ic_subsys,
};
static int __init init_qe_ic_sysfs(void)
{
int rc;
printk(KERN_DEBUG "Registering qe_ic with sysfs...\n");
rc = subsys_system_register(&qe_ic_subsys, NULL);
if (rc) {
printk(KERN_ERR "Failed registering qe_ic sys class\n");
return -ENODEV;
}
rc = device_register(&device_qe_ic);
if (rc) {
printk(KERN_ERR "Failed registering qe_ic sys device\n");
return -ENODEV;
}
return 0;
}
subsys_initcall(init_qe_ic_sysfs);
subsys_initcall(qe_ic_of_init);
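
In sum, the controller no longer exports priority/spread-mode knobs or relies on platform code passing handler pointers; everything hangs off one initcall. Condensed from the code above:

/* New init flow:
 *
 *   subsys_initcall(qe_ic_of_init)
 *     -> of_find_compatible_node(NULL, NULL, "fsl,qe-ic")
 *     -> qe_ic_init(np)
 *          - maps registers, writes CICR = 0 (grouped scheme)
 *          - picks cascade handlers itself: separate high/low IRQs
 *            get qe_ic_cascade_high/low, a shared IRQ gets
 *            qe_ic_cascade_muxed_mpic
 */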


@ -1,99 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* drivers/soc/fsl/qe/qe_ic.h
*
* QUICC ENGINE Interrupt Controller Header
*
* Copyright (C) 2006 Freescale Semiconductor, Inc. All rights reserved.
*
* Author: Li Yang <leoli@freescale.com>
* Based on code from Shlomi Gridish <gridish@freescale.com>
*/
#ifndef _POWERPC_SYSDEV_QE_IC_H
#define _POWERPC_SYSDEV_QE_IC_H
#include <soc/fsl/qe/qe_ic.h>
#define NR_QE_IC_INTS 64
/* QE IC registers offset */
#define QEIC_CICR 0x00
#define QEIC_CIVEC 0x04
#define QEIC_CRIPNR 0x08
#define QEIC_CIPNR 0x0c
#define QEIC_CIPXCC 0x10
#define QEIC_CIPYCC 0x14
#define QEIC_CIPWCC 0x18
#define QEIC_CIPZCC 0x1c
#define QEIC_CIMR 0x20
#define QEIC_CRIMR 0x24
#define QEIC_CICNR 0x28
#define QEIC_CIPRTA 0x30
#define QEIC_CIPRTB 0x34
#define QEIC_CRICR 0x3c
#define QEIC_CHIVEC 0x60
/* Interrupt priority registers */
#define CIPCC_SHIFT_PRI0 29
#define CIPCC_SHIFT_PRI1 26
#define CIPCC_SHIFT_PRI2 23
#define CIPCC_SHIFT_PRI3 20
#define CIPCC_SHIFT_PRI4 13
#define CIPCC_SHIFT_PRI5 10
#define CIPCC_SHIFT_PRI6 7
#define CIPCC_SHIFT_PRI7 4
/* CICR priority modes */
#define CICR_GWCC 0x00040000
#define CICR_GXCC 0x00020000
#define CICR_GYCC 0x00010000
#define CICR_GZCC 0x00080000
#define CICR_GRTA 0x00200000
#define CICR_GRTB 0x00400000
#define CICR_HPIT_SHIFT 8
#define CICR_HPIT_MASK 0x00000300
#define CICR_HP_SHIFT 24
#define CICR_HP_MASK 0x3f000000
/* CICNR */
#define CICNR_WCC1T_SHIFT 20
#define CICNR_ZCC1T_SHIFT 28
#define CICNR_YCC1T_SHIFT 12
#define CICNR_XCC1T_SHIFT 4
/* CRICR */
#define CRICR_RTA1T_SHIFT 20
#define CRICR_RTB1T_SHIFT 28
/* Signal indicator */
#define SIGNAL_MASK 3
#define SIGNAL_HIGH 2
#define SIGNAL_LOW 0
struct qe_ic {
/* Control registers offset */
volatile u32 __iomem *regs;
/* The remapper for this QEIC */
struct irq_domain *irqhost;
/* The "linux" controller struct */
struct irq_chip hc_irq;
/* VIRQ numbers of QE high/low irqs */
unsigned int virq_high;
unsigned int virq_low;
};
/*
* QE interrupt controller internal structure
*/
struct qe_ic_info {
u32 mask; /* location of this source at the QIMR register. */
u32 mask_reg; /* Mask register offset */
u8 pri_code; /* for grouped interrupts sources - the interrupt
code as appears at the group priority register */
u32 pri_reg; /* Group priority register offset */
};
#endif /* _POWERPC_SYSDEV_QE_IC_H */


@ -18,8 +18,6 @@
#include <asm/io.h>
#include <soc/fsl/qe/qe.h>
#include <asm/prom.h>
#include <sysdev/fsl_soc.h>
#undef DEBUG
@ -30,7 +28,7 @@ int par_io_init(struct device_node *np)
{
struct resource res;
int ret;
const u32 *num_ports;
u32 num_ports;
/* Map Parallel I/O ports registers */
ret = of_address_to_resource(np, 0, &res);
@ -38,9 +36,8 @@ int par_io_init(struct device_node *np)
return ret;
par_io = ioremap(res.start, resource_size(&res));
num_ports = of_get_property(np, "num-ports", NULL);
if (num_ports)
num_par_io_ports = *num_ports;
if (!of_property_read_u32(np, "num-ports", &num_ports))
num_par_io_ports = num_ports;
return 0;
}
@ -57,16 +54,16 @@ void __par_io_config_pin(struct qe_pio_regs __iomem *par_io, u8 pin, int dir,
pin_mask1bit = (u32) (1 << (QE_PIO_PINS - (pin + 1)));
/* Set open drain, if required */
tmp_val = in_be32(&par_io->cpodr);
tmp_val = qe_ioread32be(&par_io->cpodr);
if (open_drain)
out_be32(&par_io->cpodr, pin_mask1bit | tmp_val);
qe_iowrite32be(pin_mask1bit | tmp_val, &par_io->cpodr);
else
out_be32(&par_io->cpodr, ~pin_mask1bit & tmp_val);
qe_iowrite32be(~pin_mask1bit & tmp_val, &par_io->cpodr);
/* define direction */
tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ?
in_be32(&par_io->cpdir2) :
in_be32(&par_io->cpdir1);
qe_ioread32be(&par_io->cpdir2) :
qe_ioread32be(&par_io->cpdir1);
/* get all bits mask for 2 bit per port */
pin_mask2bits = (u32) (0x3 << (QE_PIO_PINS -
@ -78,34 +75,30 @@ void __par_io_config_pin(struct qe_pio_regs __iomem *par_io, u8 pin, int dir,
/* clear and set 2 bits mask */
if (pin > (QE_PIO_PINS / 2) - 1) {
out_be32(&par_io->cpdir2,
~pin_mask2bits & tmp_val);
qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir2);
tmp_val &= ~pin_mask2bits;
out_be32(&par_io->cpdir2, new_mask2bits | tmp_val);
qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir2);
} else {
out_be32(&par_io->cpdir1,
~pin_mask2bits & tmp_val);
qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cpdir1);
tmp_val &= ~pin_mask2bits;
out_be32(&par_io->cpdir1, new_mask2bits | tmp_val);
qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cpdir1);
}
/* define pin assignment */
tmp_val = (pin > (QE_PIO_PINS / 2) - 1) ?
in_be32(&par_io->cppar2) :
in_be32(&par_io->cppar1);
qe_ioread32be(&par_io->cppar2) :
qe_ioread32be(&par_io->cppar1);
new_mask2bits = (u32) (assignment << (QE_PIO_PINS -
(pin % (QE_PIO_PINS / 2) + 1) * 2));
/* clear and set 2 bits mask */
if (pin > (QE_PIO_PINS / 2) - 1) {
out_be32(&par_io->cppar2,
~pin_mask2bits & tmp_val);
qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar2);
tmp_val &= ~pin_mask2bits;
out_be32(&par_io->cppar2, new_mask2bits | tmp_val);
qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar2);
} else {
out_be32(&par_io->cppar1,
~pin_mask2bits & tmp_val);
qe_iowrite32be(~pin_mask2bits & tmp_val, &par_io->cppar1);
tmp_val &= ~pin_mask2bits;
out_be32(&par_io->cppar1, new_mask2bits | tmp_val);
qe_iowrite32be(new_mask2bits | tmp_val, &par_io->cppar1);
}
}
EXPORT_SYMBOL(__par_io_config_pin);
@ -133,12 +126,12 @@ int par_io_data_set(u8 port, u8 pin, u8 val)
/* calculate pin location */
pin_mask = (u32) (1 << (QE_PIO_PINS - 1 - pin));
tmp_val = in_be32(&par_io[port].cpdata);
tmp_val = qe_ioread32be(&par_io[port].cpdata);
if (val == 0) /* clear */
out_be32(&par_io[port].cpdata, ~pin_mask & tmp_val);
qe_iowrite32be(~pin_mask & tmp_val, &par_io[port].cpdata);
else /* set */
out_be32(&par_io[port].cpdata, pin_mask | tmp_val);
qe_iowrite32be(pin_mask | tmp_val, &par_io[port].cpdata);
return 0;
}
@ -147,23 +140,20 @@ EXPORT_SYMBOL(par_io_data_set);
int par_io_of_config(struct device_node *np)
{
struct device_node *pio;
const phandle *ph;
int pio_map_len;
const unsigned int *pio_map;
const __be32 *pio_map;
if (par_io == NULL) {
printk(KERN_ERR "par_io not initialized\n");
return -1;
}
ph = of_get_property(np, "pio-handle", NULL);
if (ph == NULL) {
pio = of_parse_phandle(np, "pio-handle", 0);
if (pio == NULL) {
printk(KERN_ERR "pio-handle not available\n");
return -1;
}
pio = of_find_node_by_phandle(*ph);
pio_map = of_get_property(pio, "pio-map", &pio_map_len);
if (pio_map == NULL) {
printk(KERN_ERR "pio-map is not set!\n");
@ -176,9 +166,15 @@ int par_io_of_config(struct device_node *np)
}
while (pio_map_len > 0) {
par_io_config_pin((u8) pio_map[0], (u8) pio_map[1],
(int) pio_map[2], (int) pio_map[3],
(int) pio_map[4], (int) pio_map[5]);
u8 port = be32_to_cpu(pio_map[0]);
u8 pin = be32_to_cpu(pio_map[1]);
int dir = be32_to_cpu(pio_map[2]);
int open_drain = be32_to_cpu(pio_map[3]);
int assignment = be32_to_cpu(pio_map[4]);
int has_irq = be32_to_cpu(pio_map[5]);
par_io_config_pin(port, pin, dir, open_drain,
assignment, has_irq);
pio_map += 6;
pio_map_len -= 6;
}
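
Each pio-map entry is six cells, now explicitly converted with be32_to_cpu() so the parse also works on little-endian ARM. For illustration (the cell values here are hypothetical; the cell order is taken from the loop above):

/* A single pio-map entry <1 3 1 0 2 0> decodes as:
 *   port 1, pin 3, dir 1 (output), open_drain 0,
 *   assignment 2, has_irq 0
 * and ends up as par_io_config_pin(1, 3, 1, 0, 2, 0).
 */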


@ -169,10 +169,10 @@ void ucc_tdm_init(struct ucc_tdm *utdm, struct ucc_tdm_info *ut_info)
&siram[siram_entry_id * 32 + 0x200 + i]);
}
setbits16(&siram[(siram_entry_id * 32) + (utdm->num_of_ts - 1)],
SIR_LAST);
setbits16(&siram[(siram_entry_id * 32) + 0x200 + (utdm->num_of_ts - 1)],
SIR_LAST);
qe_setbits_be16(&siram[(siram_entry_id * 32) + (utdm->num_of_ts - 1)],
SIR_LAST);
qe_setbits_be16(&siram[(siram_entry_id * 32) + 0x200 + (utdm->num_of_ts - 1)],
SIR_LAST);
/* Set SIxMR register */
sixmr = SIMR_SAD(siram_entry_id);


@ -15,7 +15,6 @@
#include <linux/spinlock.h>
#include <linux/export.h>
#include <asm/irq.h>
#include <asm/io.h>
#include <soc/fsl/qe/immap_qe.h>
#include <soc/fsl/qe/qe.h>
@ -35,8 +34,8 @@ int ucc_set_qe_mux_mii_mng(unsigned int ucc_num)
return -EINVAL;
spin_lock_irqsave(&cmxgcr_lock, flags);
clrsetbits_be32(&qe_immr->qmx.cmxgcr, QE_CMXGCR_MII_ENET_MNG,
ucc_num << QE_CMXGCR_MII_ENET_MNG_SHIFT);
qe_clrsetbits_be32(&qe_immr->qmx.cmxgcr, QE_CMXGCR_MII_ENET_MNG,
ucc_num << QE_CMXGCR_MII_ENET_MNG_SHIFT);
spin_unlock_irqrestore(&cmxgcr_lock, flags);
return 0;
@ -80,8 +79,8 @@ int ucc_set_type(unsigned int ucc_num, enum ucc_speed_type speed)
return -EINVAL;
}
clrsetbits_8(guemr, UCC_GUEMR_MODE_MASK,
UCC_GUEMR_SET_RESERVED3 | speed);
qe_clrsetbits_8(guemr, UCC_GUEMR_MODE_MASK,
UCC_GUEMR_SET_RESERVED3 | speed);
return 0;
}
@ -109,9 +108,9 @@ int ucc_mux_set_grant_tsa_bkpt(unsigned int ucc_num, int set, u32 mask)
get_cmxucr_reg(ucc_num, &cmxucr, &reg_num, &shift);
if (set)
setbits32(cmxucr, mask << shift);
qe_setbits_be32(cmxucr, mask << shift);
else
clrbits32(cmxucr, mask << shift);
qe_clrbits_be32(cmxucr, mask << shift);
return 0;
}
@ -207,8 +206,8 @@ int ucc_set_qe_mux_rxtx(unsigned int ucc_num, enum qe_clock clock,
if (mode == COMM_DIR_RX)
shift += 4;
clrsetbits_be32(cmxucr, QE_CMXUCR_TX_CLK_SRC_MASK << shift,
clock_bits << shift);
qe_clrsetbits_be32(cmxucr, QE_CMXUCR_TX_CLK_SRC_MASK << shift,
clock_bits << shift);
return 0;
}
@ -540,8 +539,8 @@ int ucc_set_tdm_rxtx_clk(u32 tdm_num, enum qe_clock clock,
cmxs1cr = (tdm_num < 4) ? &qe_mux_reg->cmxsi1cr_l :
&qe_mux_reg->cmxsi1cr_h;
qe_clrsetbits32(cmxs1cr, QE_CMXUCR_TX_CLK_SRC_MASK << shift,
clock_bits << shift);
qe_clrsetbits_be32(cmxs1cr, QE_CMXUCR_TX_CLK_SRC_MASK << shift,
clock_bits << shift);
return 0;
}
@ -650,9 +649,9 @@ int ucc_set_tdm_rxtx_sync(u32 tdm_num, enum qe_clock clock,
shift = ucc_get_tdm_sync_shift(mode, tdm_num);
qe_clrsetbits32(&qe_mux_reg->cmxsi1syr,
QE_CMXUCR_TX_CLK_SRC_MASK << shift,
source << shift);
qe_clrsetbits_be32(&qe_mux_reg->cmxsi1syr,
QE_CMXUCR_TX_CLK_SRC_MASK << shift,
source << shift);
return 0;
}


@ -29,41 +29,42 @@ void ucc_fast_dump_regs(struct ucc_fast_private * uccf)
printk(KERN_INFO "Base address: 0x%p\n", uccf->uf_regs);
printk(KERN_INFO "gumr : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->gumr, in_be32(&uccf->uf_regs->gumr));
&uccf->uf_regs->gumr, qe_ioread32be(&uccf->uf_regs->gumr));
printk(KERN_INFO "upsmr : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->upsmr, in_be32(&uccf->uf_regs->upsmr));
&uccf->uf_regs->upsmr, qe_ioread32be(&uccf->uf_regs->upsmr));
printk(KERN_INFO "utodr : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->utodr, in_be16(&uccf->uf_regs->utodr));
&uccf->uf_regs->utodr, qe_ioread16be(&uccf->uf_regs->utodr));
printk(KERN_INFO "udsr : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->udsr, in_be16(&uccf->uf_regs->udsr));
&uccf->uf_regs->udsr, qe_ioread16be(&uccf->uf_regs->udsr));
printk(KERN_INFO "ucce : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->ucce, in_be32(&uccf->uf_regs->ucce));
&uccf->uf_regs->ucce, qe_ioread32be(&uccf->uf_regs->ucce));
printk(KERN_INFO "uccm : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->uccm, in_be32(&uccf->uf_regs->uccm));
&uccf->uf_regs->uccm, qe_ioread32be(&uccf->uf_regs->uccm));
printk(KERN_INFO "uccs : addr=0x%p, val=0x%02x\n",
&uccf->uf_regs->uccs, in_8(&uccf->uf_regs->uccs));
&uccf->uf_regs->uccs, qe_ioread8(&uccf->uf_regs->uccs));
printk(KERN_INFO "urfb : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->urfb, in_be32(&uccf->uf_regs->urfb));
&uccf->uf_regs->urfb, qe_ioread32be(&uccf->uf_regs->urfb));
printk(KERN_INFO "urfs : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->urfs, in_be16(&uccf->uf_regs->urfs));
&uccf->uf_regs->urfs, qe_ioread16be(&uccf->uf_regs->urfs));
printk(KERN_INFO "urfet : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->urfet, in_be16(&uccf->uf_regs->urfet));
&uccf->uf_regs->urfet, qe_ioread16be(&uccf->uf_regs->urfet));
printk(KERN_INFO "urfset: addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->urfset, in_be16(&uccf->uf_regs->urfset));
&uccf->uf_regs->urfset,
qe_ioread16be(&uccf->uf_regs->urfset));
printk(KERN_INFO "utfb : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->utfb, in_be32(&uccf->uf_regs->utfb));
&uccf->uf_regs->utfb, qe_ioread32be(&uccf->uf_regs->utfb));
printk(KERN_INFO "utfs : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->utfs, in_be16(&uccf->uf_regs->utfs));
&uccf->uf_regs->utfs, qe_ioread16be(&uccf->uf_regs->utfs));
printk(KERN_INFO "utfet : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->utfet, in_be16(&uccf->uf_regs->utfet));
&uccf->uf_regs->utfet, qe_ioread16be(&uccf->uf_regs->utfet));
printk(KERN_INFO "utftt : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->utftt, in_be16(&uccf->uf_regs->utftt));
&uccf->uf_regs->utftt, qe_ioread16be(&uccf->uf_regs->utftt));
printk(KERN_INFO "utpt : addr=0x%p, val=0x%04x\n",
&uccf->uf_regs->utpt, in_be16(&uccf->uf_regs->utpt));
&uccf->uf_regs->utpt, qe_ioread16be(&uccf->uf_regs->utpt));
printk(KERN_INFO "urtry : addr=0x%p, val=0x%08x\n",
&uccf->uf_regs->urtry, in_be32(&uccf->uf_regs->urtry));
&uccf->uf_regs->urtry, qe_ioread32be(&uccf->uf_regs->urtry));
printk(KERN_INFO "guemr : addr=0x%p, val=0x%02x\n",
&uccf->uf_regs->guemr, in_8(&uccf->uf_regs->guemr));
&uccf->uf_regs->guemr, qe_ioread8(&uccf->uf_regs->guemr));
}
EXPORT_SYMBOL(ucc_fast_dump_regs);
@ -85,7 +86,7 @@ EXPORT_SYMBOL(ucc_fast_get_qe_cr_subblock);
void ucc_fast_transmit_on_demand(struct ucc_fast_private * uccf)
{
out_be16(&uccf->uf_regs->utodr, UCC_FAST_TOD);
qe_iowrite16be(UCC_FAST_TOD, &uccf->uf_regs->utodr);
}
EXPORT_SYMBOL(ucc_fast_transmit_on_demand);
@ -97,7 +98,7 @@ void ucc_fast_enable(struct ucc_fast_private * uccf, enum comm_dir mode)
uf_regs = uccf->uf_regs;
/* Enable reception and/or transmission on this UCC. */
gumr = in_be32(&uf_regs->gumr);
gumr = qe_ioread32be(&uf_regs->gumr);
if (mode & COMM_DIR_TX) {
gumr |= UCC_FAST_GUMR_ENT;
uccf->enabled_tx = 1;
@ -106,7 +107,7 @@ void ucc_fast_enable(struct ucc_fast_private * uccf, enum comm_dir mode)
gumr |= UCC_FAST_GUMR_ENR;
uccf->enabled_rx = 1;
}
out_be32(&uf_regs->gumr, gumr);
qe_iowrite32be(gumr, &uf_regs->gumr);
}
EXPORT_SYMBOL(ucc_fast_enable);
@ -118,7 +119,7 @@ void ucc_fast_disable(struct ucc_fast_private * uccf, enum comm_dir mode)
uf_regs = uccf->uf_regs;
/* Disable reception and/or transmission on this UCC. */
gumr = in_be32(&uf_regs->gumr);
gumr = qe_ioread32be(&uf_regs->gumr);
if (mode & COMM_DIR_TX) {
gumr &= ~UCC_FAST_GUMR_ENT;
uccf->enabled_tx = 0;
@ -127,7 +128,7 @@ void ucc_fast_disable(struct ucc_fast_private * uccf, enum comm_dir mode)
gumr &= ~UCC_FAST_GUMR_ENR;
uccf->enabled_rx = 0;
}
out_be32(&uf_regs->gumr, gumr);
qe_iowrite32be(gumr, &uf_regs->gumr);
}
EXPORT_SYMBOL(ucc_fast_disable);
@ -196,6 +197,8 @@ int ucc_fast_init(struct ucc_fast_info * uf_info, struct ucc_fast_private ** ucc
__func__);
return -ENOMEM;
}
uccf->ucc_fast_tx_virtual_fifo_base_offset = -1;
uccf->ucc_fast_rx_virtual_fifo_base_offset = -1;
/* Fill fast UCC structure */
uccf->uf_info = uf_info;
@ -259,15 +262,14 @@ int ucc_fast_init(struct ucc_fast_info * uf_info, struct ucc_fast_private ** ucc
gumr |= uf_info->tenc;
gumr |= uf_info->tcrc;
gumr |= uf_info->mode;
out_be32(&uf_regs->gumr, gumr);
qe_iowrite32be(gumr, &uf_regs->gumr);
/* Allocate memory for Tx Virtual Fifo */
uccf->ucc_fast_tx_virtual_fifo_base_offset =
qe_muram_alloc(uf_info->utfs, UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT);
if (IS_ERR_VALUE(uccf->ucc_fast_tx_virtual_fifo_base_offset)) {
if (uccf->ucc_fast_tx_virtual_fifo_base_offset < 0) {
printk(KERN_ERR "%s: cannot allocate MURAM for TX FIFO\n",
__func__);
uccf->ucc_fast_tx_virtual_fifo_base_offset = 0;
ucc_fast_free(uccf);
return -ENOMEM;
}
@ -277,24 +279,25 @@ int ucc_fast_init(struct ucc_fast_info * uf_info, struct ucc_fast_private ** ucc
qe_muram_alloc(uf_info->urfs +
UCC_FAST_RECEIVE_VIRTUAL_FIFO_SIZE_FUDGE_FACTOR,
UCC_FAST_VIRT_FIFO_REGS_ALIGNMENT);
if (IS_ERR_VALUE(uccf->ucc_fast_rx_virtual_fifo_base_offset)) {
if (uccf->ucc_fast_rx_virtual_fifo_base_offset < 0) {
printk(KERN_ERR "%s: cannot allocate MURAM for RX FIFO\n",
__func__);
uccf->ucc_fast_rx_virtual_fifo_base_offset = 0;
ucc_fast_free(uccf);
return -ENOMEM;
}
/* Set Virtual Fifo registers */
out_be16(&uf_regs->urfs, uf_info->urfs);
out_be16(&uf_regs->urfet, uf_info->urfet);
out_be16(&uf_regs->urfset, uf_info->urfset);
out_be16(&uf_regs->utfs, uf_info->utfs);
out_be16(&uf_regs->utfet, uf_info->utfet);
out_be16(&uf_regs->utftt, uf_info->utftt);
qe_iowrite16be(uf_info->urfs, &uf_regs->urfs);
qe_iowrite16be(uf_info->urfet, &uf_regs->urfet);
qe_iowrite16be(uf_info->urfset, &uf_regs->urfset);
qe_iowrite16be(uf_info->utfs, &uf_regs->utfs);
qe_iowrite16be(uf_info->utfet, &uf_regs->utfet);
qe_iowrite16be(uf_info->utftt, &uf_regs->utftt);
/* utfb, urfb are offsets from MURAM base */
out_be32(&uf_regs->utfb, uccf->ucc_fast_tx_virtual_fifo_base_offset);
out_be32(&uf_regs->urfb, uccf->ucc_fast_rx_virtual_fifo_base_offset);
qe_iowrite32be(uccf->ucc_fast_tx_virtual_fifo_base_offset,
&uf_regs->utfb);
qe_iowrite32be(uccf->ucc_fast_rx_virtual_fifo_base_offset,
&uf_regs->urfb);
/* Mux clocking */
/* Grant Support */
@ -362,14 +365,14 @@ int ucc_fast_init(struct ucc_fast_info * uf_info, struct ucc_fast_private ** ucc
}
/* Set interrupt mask register at UCC level. */
out_be32(&uf_regs->uccm, uf_info->uccm_mask);
qe_iowrite32be(uf_info->uccm_mask, &uf_regs->uccm);
/* First, clear anything pending at UCC level,
* otherwise, old garbage may come through
* as soon as the dam is opened. */
/* Writing '1' clears */
out_be32(&uf_regs->ucce, 0xffffffff);
qe_iowrite32be(0xffffffff, &uf_regs->ucce);
*uccf_ret = uccf;
return 0;
@ -381,11 +384,8 @@ void ucc_fast_free(struct ucc_fast_private * uccf)
if (!uccf)
return;
if (uccf->ucc_fast_tx_virtual_fifo_base_offset)
qe_muram_free(uccf->ucc_fast_tx_virtual_fifo_base_offset);
if (uccf->ucc_fast_rx_virtual_fifo_base_offset)
qe_muram_free(uccf->ucc_fast_rx_virtual_fifo_base_offset);
qe_muram_free(uccf->ucc_fast_tx_virtual_fifo_base_offset);
qe_muram_free(uccf->ucc_fast_rx_virtual_fifo_base_offset);
if (uccf->uf_regs)
iounmap(uccf->uf_regs);


@ -78,7 +78,7 @@ void ucc_slow_enable(struct ucc_slow_private * uccs, enum comm_dir mode)
us_regs = uccs->us_regs;
/* Enable reception and/or transmission on this UCC. */
gumr_l = in_be32(&us_regs->gumr_l);
gumr_l = qe_ioread32be(&us_regs->gumr_l);
if (mode & COMM_DIR_TX) {
gumr_l |= UCC_SLOW_GUMR_L_ENT;
uccs->enabled_tx = 1;
@ -87,7 +87,7 @@ void ucc_slow_enable(struct ucc_slow_private * uccs, enum comm_dir mode)
gumr_l |= UCC_SLOW_GUMR_L_ENR;
uccs->enabled_rx = 1;
}
out_be32(&us_regs->gumr_l, gumr_l);
qe_iowrite32be(gumr_l, &us_regs->gumr_l);
}
EXPORT_SYMBOL(ucc_slow_enable);
@ -99,7 +99,7 @@ void ucc_slow_disable(struct ucc_slow_private * uccs, enum comm_dir mode)
us_regs = uccs->us_regs;
/* Disable reception and/or transmission on this UCC. */
gumr_l = in_be32(&us_regs->gumr_l);
gumr_l = qe_ioread32be(&us_regs->gumr_l);
if (mode & COMM_DIR_TX) {
gumr_l &= ~UCC_SLOW_GUMR_L_ENT;
uccs->enabled_tx = 0;
@ -108,7 +108,7 @@ void ucc_slow_disable(struct ucc_slow_private * uccs, enum comm_dir mode)
gumr_l &= ~UCC_SLOW_GUMR_L_ENR;
uccs->enabled_rx = 0;
}
out_be32(&us_regs->gumr_l, gumr_l);
qe_iowrite32be(gumr_l, &us_regs->gumr_l);
}
EXPORT_SYMBOL(ucc_slow_disable);
@ -154,6 +154,9 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
__func__);
return -ENOMEM;
}
uccs->rx_base_offset = -1;
uccs->tx_base_offset = -1;
uccs->us_pram_offset = -1;
/* Fill slow UCC structure */
uccs->us_info = us_info;
@ -179,7 +182,7 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
/* Get PRAM base */
uccs->us_pram_offset =
qe_muram_alloc(UCC_SLOW_PRAM_SIZE, ALIGNMENT_OF_UCC_SLOW_PRAM);
if (IS_ERR_VALUE(uccs->us_pram_offset)) {
if (uccs->us_pram_offset < 0) {
printk(KERN_ERR "%s: cannot allocate MURAM for PRAM", __func__);
ucc_slow_free(uccs);
return -ENOMEM;
@ -198,7 +201,7 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
return ret;
}
out_be16(&uccs->us_pram->mrblr, us_info->max_rx_buf_length);
qe_iowrite16be(us_info->max_rx_buf_length, &uccs->us_pram->mrblr);
INIT_LIST_HEAD(&uccs->confQ);
@ -206,10 +209,9 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
uccs->rx_base_offset =
qe_muram_alloc(us_info->rx_bd_ring_len * sizeof(struct qe_bd),
QE_ALIGNMENT_OF_BD);
if (IS_ERR_VALUE(uccs->rx_base_offset)) {
if (uccs->rx_base_offset < 0) {
printk(KERN_ERR "%s: cannot allocate %u RX BDs\n", __func__,
us_info->rx_bd_ring_len);
uccs->rx_base_offset = 0;
ucc_slow_free(uccs);
return -ENOMEM;
}
@ -217,9 +219,8 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
uccs->tx_base_offset =
qe_muram_alloc(us_info->tx_bd_ring_len * sizeof(struct qe_bd),
QE_ALIGNMENT_OF_BD);
if (IS_ERR_VALUE(uccs->tx_base_offset)) {
if (uccs->tx_base_offset < 0) {
printk(KERN_ERR "%s: cannot allocate TX BDs", __func__);
uccs->tx_base_offset = 0;
ucc_slow_free(uccs);
return -ENOMEM;
}
@ -228,27 +229,27 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
bd = uccs->confBd = uccs->tx_bd = qe_muram_addr(uccs->tx_base_offset);
for (i = 0; i < us_info->tx_bd_ring_len - 1; i++) {
/* clear bd buffer */
out_be32(&bd->buf, 0);
qe_iowrite32be(0, &bd->buf);
/* set bd status and length */
out_be32((u32 *) bd, 0);
qe_iowrite32be(0, (u32 *)bd);
bd++;
}
/* for last BD set Wrap bit */
out_be32(&bd->buf, 0);
out_be32((u32 *) bd, cpu_to_be32(T_W));
qe_iowrite32be(0, &bd->buf);
qe_iowrite32be(cpu_to_be32(T_W), (u32 *)bd);
/* Init Rx bds */
bd = uccs->rx_bd = qe_muram_addr(uccs->rx_base_offset);
for (i = 0; i < us_info->rx_bd_ring_len - 1; i++) {
/* set bd status and length */
out_be32((u32*)bd, 0);
qe_iowrite32be(0, (u32 *)bd);
/* clear bd buffer */
out_be32(&bd->buf, 0);
qe_iowrite32be(0, &bd->buf);
bd++;
}
/* for last BD set Wrap bit */
out_be32((u32*)bd, cpu_to_be32(R_W));
out_be32(&bd->buf, 0);
qe_iowrite32be(cpu_to_be32(R_W), (u32 *)bd);
qe_iowrite32be(0, &bd->buf);
/* Set GUMR (For more details see the hardware spec.). */
/* gumr_h */
@ -269,7 +270,7 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
gumr |= UCC_SLOW_GUMR_H_TXSY;
if (us_info->rtsm)
gumr |= UCC_SLOW_GUMR_H_RTSM;
out_be32(&us_regs->gumr_h, gumr);
qe_iowrite32be(gumr, &us_regs->gumr_h);
/* gumr_l */
gumr = us_info->tdcr | us_info->rdcr | us_info->tenc | us_info->renc |
@ -282,7 +283,7 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
gumr |= UCC_SLOW_GUMR_L_TINV;
if (us_info->tend)
gumr |= UCC_SLOW_GUMR_L_TEND;
out_be32(&us_regs->gumr_l, gumr);
qe_iowrite32be(gumr, &us_regs->gumr_l);
/* Function code registers */
@ -292,8 +293,8 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
uccs->us_pram->rbmr = UCC_BMR_BO_BE;
/* rbase, tbase are offsets from MURAM base */
out_be16(&uccs->us_pram->rbase, uccs->rx_base_offset);
out_be16(&uccs->us_pram->tbase, uccs->tx_base_offset);
qe_iowrite16be(uccs->rx_base_offset, &uccs->us_pram->rbase);
qe_iowrite16be(uccs->tx_base_offset, &uccs->us_pram->tbase);
/* Mux clocking */
/* Grant Support */
@ -323,14 +324,14 @@ int ucc_slow_init(struct ucc_slow_info * us_info, struct ucc_slow_private ** ucc
}
/* Set interrupt mask register at UCC level. */
out_be16(&us_regs->uccm, us_info->uccm_mask);
qe_iowrite16be(us_info->uccm_mask, &us_regs->uccm);
/* First, clear anything pending at UCC level,
* otherwise, old garbage may come through
* as soon as the dam is opened. */
/* Writing '1' clears */
out_be16(&us_regs->ucce, 0xffff);
qe_iowrite16be(0xffff, &us_regs->ucce);
/* Issue QE Init command */
if (us_info->init_tx && us_info->init_rx)
@ -352,14 +353,9 @@ void ucc_slow_free(struct ucc_slow_private * uccs)
if (!uccs)
return;
if (uccs->rx_base_offset)
qe_muram_free(uccs->rx_base_offset);
if (uccs->tx_base_offset)
qe_muram_free(uccs->tx_base_offset);
if (uccs->us_pram)
qe_muram_free(uccs->us_pram_offset);
qe_muram_free(uccs->rx_base_offset);
qe_muram_free(uccs->tx_base_offset);
qe_muram_free(uccs->us_pram_offset);
if (uccs->us_regs)
iounmap(uccs->us_regs);


@ -43,7 +43,7 @@ int qe_usb_clock_set(enum qe_clock clk, int rate)
spin_lock_irqsave(&cmxgcr_lock, flags);
clrsetbits_be32(&mux->cmxgcr, QE_CMXGCR_USBCS, val);
qe_clrsetbits_be32(&mux->cmxgcr, QE_CMXGCR_USBCS, val);
spin_unlock_irqrestore(&cmxgcr_lock, flags);


@ -10,7 +10,7 @@ config IMX_GPCV2_PM_DOMAINS
config IMX_SCU_SOC
bool "i.MX System Controller Unit SoC info support"
depends on IMX_SCU
depends on IMX_SCU || COMPILE_TEST
select SOC_BUS
help
If you say yes here you get support for the NXP i.MX System


@ -142,10 +142,16 @@ static const struct imx8_soc_data imx8mn_soc_data = {
.soc_revision = imx8mm_soc_revision,
};
static const struct imx8_soc_data imx8mp_soc_data = {
.name = "i.MX8MP",
.soc_revision = imx8mm_soc_revision,
};
static const struct of_device_id imx8_soc_match[] = {
{ .compatible = "fsl,imx8mq", .data = &imx8mq_soc_data, },
{ .compatible = "fsl,imx8mm", .data = &imx8mm_soc_data, },
{ .compatible = "fsl,imx8mn", .data = &imx8mn_soc_data, },
{ .compatible = "fsl,imx8mp", .data = &imx8mp_soc_data, },
{ }
};
@ -204,6 +210,9 @@ static int __init imx8_soc_init(void)
goto free_serial_number;
}
pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id,
soc_dev_attr->revision);
if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT))
platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);


@ -12,8 +12,6 @@
#define CMDQ_WRITE_ENABLE_MASK BIT(0)
#define CMDQ_POLL_ENABLE_MASK BIT(0)
#define CMDQ_EOC_IRQ_EN BIT(0)
#define CMDQ_EOC_CMD ((u64)((CMDQ_CODE_EOC << CMDQ_OP_CODE_SHIFT)) \
<< 32 | CMDQ_EOC_IRQ_EN)
struct cmdq_instruction {
union {


@ -45,13 +45,13 @@ config QCOM_GLINK_SSR
neighboring subsystems going up or down.
config QCOM_GSBI
tristate "QCOM General Serial Bus Interface"
depends on ARCH_QCOM || COMPILE_TEST
select MFD_SYSCON
help
Say y here to enable GSBI support. The GSBI provides control
functions for connecting the underlying serial UART, SPI, and I2C
devices to the output pins.
tristate "QCOM General Serial Bus Interface"
depends on ARCH_QCOM || COMPILE_TEST
select MFD_SYSCON
help
Say y here to enable GSBI support. The GSBI provides control
functions for connecting the underlying serial UART, SPI, and I2C
devices to the output pins.
config QCOM_LLCC
tristate "Qualcomm Technologies, Inc. LLCC driver"
@ -71,10 +71,10 @@ config QCOM_OCMEM
depends on ARCH_QCOM
select QCOM_SCM
help
The On Chip Memory (OCMEM) allocator allows various clients to
allocate memory from OCMEM based on performance, latency and power
requirements. This is typically used by the GPU, camera/video, and
audio components on some Snapdragon SoCs.
The On Chip Memory (OCMEM) allocator allows various clients to
allocate memory from OCMEM based on performance, latency and power
requirements. This is typically used by the GPU, camera/video, and
audio components on some Snapdragon SoCs.
config QCOM_PM
bool "Qualcomm Power Management"
@ -198,8 +198,8 @@ config QCOM_APR
depends on ARCH_QCOM || COMPILE_TEST
depends on RPMSG
help
Enable APR IPC protocol support between
application processor and QDSP6. APR is
used by audio driver to configure QDSP6
ASM, ADM and AFE modules.
Enable APR IPC protocol support between
application processor and QDSP6. APR is
used by audio driver to configure QDSP6
ASM, ADM and AFE modules.
endmenu


@ -655,8 +655,12 @@ int qmi_handle_init(struct qmi_handle *qmi, size_t recv_buf_size,
qmi->sock = qmi_sock_create(qmi, &qmi->sq);
if (IS_ERR(qmi->sock)) {
pr_err("failed to create QMI socket\n");
ret = PTR_ERR(qmi->sock);
if (PTR_ERR(qmi->sock) == -EAFNOSUPPORT) {
ret = -EPROBE_DEFER;
} else {
pr_err("failed to create QMI socket\n");
ret = PTR_ERR(qmi->sock);
}
goto err_destroy_wq;
}
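
AF_QIPCRTR is registered by the qrtr module, which may not be loaded yet when a QMI client probes; remapping -EAFNOSUPPORT to -EPROBE_DEFER lets the driver core retry the probe later instead of failing outright. A consumer sketch (driver structure, names, and buffer size are hypothetical; only qmi_handle_init() is from this file):

#include <linux/platform_device.h>
#include <linux/soc/qcom/qmi.h>

struct example_drv {
	struct qmi_handle qmi;
};

static const struct qmi_msg_handler example_handlers[] = {
	{ /* sentinel */ }
};

static int example_probe(struct platform_device *pdev)
{
	struct example_drv *drv;

	drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
	if (!drv)
		return -ENOMEM;

	/* Returning -EPROBE_DEFER here reschedules this probe for
	 * after the qrtr socket family has registered. */
	return qmi_handle_init(&drv->qmi, 1024, NULL, example_handlers);
}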
