Char/Misc driver patches for 4.10-rc1
Here's the big char/misc driver patches for 4.10-rc1. Lots of tiny changes over lots of "minor" driver subsystems, the largest being some new FPGA drivers. Other than that, a few other new drivers, but no new driver subsystems added for this kernel cycle, a nice change. All of these have been in linux-next with no reported issues. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> -----BEGIN PGP SIGNATURE----- iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCWFAtwA8cZ3JlZ0Brcm9h aC5jb20ACgkQMUfUDdst+ykyCgCeJn36u1AsBi7qZ3u/1hwD8k56s2IAnRo6U31r WW65YcNTK7qYXqNbfgIa =/t/V -----END PGP SIGNATURE----- Merge tag 'char-misc-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc Pull char/misc driver updates from Greg KH: "Here's the big char/misc driver patches for 4.10-rc1. Lots of tiny changes over lots of "minor" driver subsystems, the largest being some new FPGA drivers. Other than that, a few other new drivers, but no new driver subsystems added for this kernel cycle, a nice change. All of these have been in linux-next with no reported issues" * tag 'char-misc-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (107 commits) uio-hv-generic: store physical addresses instead of virtual Tools: hv: kvp: configurable external scripts path uio-hv-generic: new userspace i/o driver for VMBus vmbus: add support for dynamic device id's hv: change clockevents unbind tactics hv: acquire vmbus_connection.channel_mutex in vmbus_free_channels() hyperv: Fix spelling of HV_UNKOWN mei: bus: enable non-blocking RX mei: fix the back to back interrupt handling mei: synchronize irq before initiating a reset. VME: Remove shutdown entry from vme_driver auxdisplay: ht16k33: select framebuffer helper modules MAINTAINERS: add git url for fpga fpga: Clarify how write_init works streaming modes fpga zynq: Fix incorrect ISR state on bootup fpga zynq: Remove priv->dev fpga zynq: Add missing \n to messages fpga: Add COMPILE_TEST to all drivers uio: pruss: add clk_disable() char/pcmcia: add some error checking in scr24x_read() ...
This commit is contained in:
Коммит
b78b499a67
|
@ -0,0 +1,11 @@
|
|||
What: /sys/class/fpga_bridge/<bridge>/name
|
||||
Date: January 2016
|
||||
KernelVersion: 4.5
|
||||
Contact: Alan Tull <atull@opensource.altera.com>
|
||||
Description: Name of low level FPGA bridge driver.
|
||||
|
||||
What: /sys/class/fpga_bridge/<bridge>/state
|
||||
Date: January 2016
|
||||
KernelVersion: 4.5
|
||||
Contact: Alan Tull <atull@opensource.altera.com>
|
||||
Description: Show bridge state as "enabled" or "disabled"
|
|
@ -29,3 +29,19 @@ Description: Display fw status registers content
|
|||
Also number of registers varies between 1 and 6
|
||||
depending on generation.
|
||||
|
||||
What: /sys/class/mei/meiN/hbm_ver
|
||||
Date: Aug 2016
|
||||
KernelVersion: 4.9
|
||||
Contact: Tomas Winkler <tomas.winkler@intel.com>
|
||||
Description: Display the negotiated HBM protocol version.
|
||||
|
||||
The HBM protocol version negotiated
|
||||
between the driver and the device.
|
||||
|
||||
What: /sys/class/mei/meiN/hbm_ver_drv
|
||||
Date: Aug 2016
|
||||
KernelVersion: 4.9
|
||||
Contact: Tomas Winkler <tomas.winkler@intel.com>
|
||||
Description: Display the driver HBM protocol version.
|
||||
|
||||
The HBM protocol version supported by the driver.
|
||||
|
|
|
@ -0,0 +1,42 @@
|
|||
Holtek ht16k33 RAM mapping 16*8 LED controller driver with keyscan
|
||||
-------------------------------------------------------------------------------
|
||||
|
||||
Required properties:
|
||||
- compatible: "holtek,ht16k33"
|
||||
- reg: I2C slave address of the chip.
|
||||
- interrupt-parent: A phandle pointing to the interrupt controller
|
||||
serving the interrupt for this chip.
|
||||
- interrupts: Interrupt specification for the key pressed interrupt.
|
||||
- refresh-rate-hz: Display update interval in HZ.
|
||||
- debounce-delay-ms: Debouncing interval time in milliseconds.
|
||||
- linux,keymap: The keymap for keys as described in the binding
|
||||
document (devicetree/bindings/input/matrix-keymap.txt).
|
||||
|
||||
Optional properties:
|
||||
- linux,no-autorepeat: Disable keyrepeat.
|
||||
- default-brightness-level: Initial brightness level [0-15] (default: 15).
|
||||
|
||||
Example:
|
||||
|
||||
&i2c1 {
|
||||
ht16k33: ht16k33@70 {
|
||||
compatible = "holtek,ht16k33";
|
||||
reg = <0x70>;
|
||||
refresh-rate-hz = <20>;
|
||||
debounce-delay-ms = <50>;
|
||||
interrupt-parent = <&gpio4>;
|
||||
interrupts = <5 (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_EDGE_RISING)>;
|
||||
linux,keymap = <
|
||||
MATRIX_KEY(2, 0, KEY_F6)
|
||||
MATRIX_KEY(3, 0, KEY_F8)
|
||||
MATRIX_KEY(4, 0, KEY_F10)
|
||||
MATRIX_KEY(5, 0, KEY_F4)
|
||||
MATRIX_KEY(6, 0, KEY_F2)
|
||||
MATRIX_KEY(2, 1, KEY_F5)
|
||||
MATRIX_KEY(3, 1, KEY_F7)
|
||||
MATRIX_KEY(4, 1, KEY_F9)
|
||||
MATRIX_KEY(5, 1, KEY_F3)
|
||||
MATRIX_KEY(6, 1, KEY_F1)
|
||||
>;
|
||||
};
|
||||
};
|
|
@ -0,0 +1,494 @@
|
|||
FPGA Region Device Tree Binding
|
||||
|
||||
Alan Tull 2016
|
||||
|
||||
CONTENTS
|
||||
- Introduction
|
||||
- Terminology
|
||||
- Sequence
|
||||
- FPGA Region
|
||||
- Supported Use Models
|
||||
- Device Tree Examples
|
||||
- Constraints
|
||||
|
||||
|
||||
Introduction
|
||||
============
|
||||
|
||||
FPGA Regions represent FPGA's and partial reconfiguration regions of FPGA's in
|
||||
the Device Tree. FPGA Regions provide a way to program FPGAs under device tree
|
||||
control.
|
||||
|
||||
This device tree binding document hits some of the high points of FPGA usage and
|
||||
attempts to include terminology used by both major FPGA manufacturers. This
|
||||
document isn't a replacement for any manufacturers specifications for FPGA
|
||||
usage.
|
||||
|
||||
|
||||
Terminology
|
||||
===========
|
||||
|
||||
Full Reconfiguration
|
||||
* The entire FPGA is programmed.
|
||||
|
||||
Partial Reconfiguration (PR)
|
||||
* A section of an FPGA is reprogrammed while the rest of the FPGA is not
|
||||
affected.
|
||||
* Not all FPGA's support PR.
|
||||
|
||||
Partial Reconfiguration Region (PRR)
|
||||
* Also called a "reconfigurable partition"
|
||||
* A PRR is a specific section of a FPGA reserved for reconfiguration.
|
||||
* A base (or static) FPGA image may create a set of PRR's that later may
|
||||
be independently reprogrammed many times.
|
||||
* The size and specific location of each PRR is fixed.
|
||||
* The connections at the edge of each PRR are fixed. The image that is loaded
|
||||
into a PRR must fit and must use a subset of the region's connections.
|
||||
* The busses within the FPGA are split such that each region gets its own
|
||||
branch that may be gated independently.
|
||||
|
||||
Persona
|
||||
* Also called a "partial bit stream"
|
||||
* An FPGA image that is designed to be loaded into a PRR. There may be
|
||||
any number of personas designed to fit into a PRR, but only one at at time
|
||||
may be loaded.
|
||||
* A persona may create more regions.
|
||||
|
||||
FPGA Bridge
|
||||
* FPGA Bridges gate bus signals between a host and FPGA.
|
||||
* FPGA Bridges should be disabled while the FPGA is being programmed to
|
||||
prevent spurious signals on the cpu bus and to the soft logic.
|
||||
* FPGA bridges may be actual hardware or soft logic on an FPGA.
|
||||
* During Full Reconfiguration, hardware bridges between the host and FPGA
|
||||
will be disabled.
|
||||
* During Partial Reconfiguration of a specific region, that region's bridge
|
||||
will be used to gate the busses. Traffic to other regions is not affected.
|
||||
* In some implementations, the FPGA Manager transparantly handles gating the
|
||||
buses, eliminating the need to show the hardware FPGA bridges in the
|
||||
device tree.
|
||||
* An FPGA image may create a set of reprogrammable regions, each having its
|
||||
own bridge and its own split of the busses in the FPGA.
|
||||
|
||||
FPGA Manager
|
||||
* An FPGA Manager is a hardware block that programs an FPGA under the control
|
||||
of a host processor.
|
||||
|
||||
Base Image
|
||||
* Also called the "static image"
|
||||
* An FPGA image that is designed to do full reconfiguration of the FPGA.
|
||||
* A base image may set up a set of partial reconfiguration regions that may
|
||||
later be reprogrammed.
|
||||
|
||||
---------------- ----------------------------------
|
||||
| Host CPU | | FPGA |
|
||||
| | | |
|
||||
| ----| | ----------- -------- |
|
||||
| | H | | |==>| Bridge0 |<==>| PRR0 | |
|
||||
| | W | | | ----------- -------- |
|
||||
| | | | | |
|
||||
| | B |<=====>|<==| ----------- -------- |
|
||||
| | R | | |==>| Bridge1 |<==>| PRR1 | |
|
||||
| | I | | | ----------- -------- |
|
||||
| | D | | | |
|
||||
| | G | | | ----------- -------- |
|
||||
| | E | | |==>| Bridge2 |<==>| PRR2 | |
|
||||
| ----| | ----------- -------- |
|
||||
| | | |
|
||||
---------------- ----------------------------------
|
||||
|
||||
Figure 1: An FPGA set up with a base image that created three regions. Each
|
||||
region (PRR0-2) gets its own split of the busses that is independently gated by
|
||||
a soft logic bridge (Bridge0-2) in the FPGA. The contents of each PRR can be
|
||||
reprogrammed independently while the rest of the system continues to function.
|
||||
|
||||
|
||||
Sequence
|
||||
========
|
||||
|
||||
When a DT overlay that targets a FPGA Region is applied, the FPGA Region will
|
||||
do the following:
|
||||
|
||||
1. Disable appropriate FPGA bridges.
|
||||
2. Program the FPGA using the FPGA manager.
|
||||
3. Enable the FPGA bridges.
|
||||
4. The Device Tree overlay is accepted into the live tree.
|
||||
5. Child devices are populated.
|
||||
|
||||
When the overlay is removed, the child nodes will be removed and the FPGA Region
|
||||
will disable the bridges.
|
||||
|
||||
|
||||
FPGA Region
|
||||
===========
|
||||
|
||||
FPGA Regions represent FPGA's and FPGA PR regions in the device tree. An FPGA
|
||||
Region brings together the elements needed to program on a running system and
|
||||
add the child devices:
|
||||
|
||||
* FPGA Manager
|
||||
* FPGA Bridges
|
||||
* image-specific information needed to to the programming.
|
||||
* child nodes
|
||||
|
||||
The intended use is that a Device Tree overlay (DTO) can be used to reprogram an
|
||||
FPGA while an operating system is running.
|
||||
|
||||
An FPGA Region that exists in the live Device Tree reflects the current state.
|
||||
If the live tree shows a "firmware-name" property or child nodes under a FPGA
|
||||
Region, the FPGA already has been programmed. A DTO that targets a FPGA Region
|
||||
and adds the "firmware-name" property is taken as a request to reprogram the
|
||||
FPGA. After reprogramming is successful, the overlay is accepted into the live
|
||||
tree.
|
||||
|
||||
The base FPGA Region in the device tree represents the FPGA and supports full
|
||||
reconfiguration. It must include a phandle to an FPGA Manager. The base
|
||||
FPGA region will be the child of one of the hardware bridges (the bridge that
|
||||
allows register access) between the cpu and the FPGA. If there are more than
|
||||
one bridge to control during FPGA programming, the region will also contain a
|
||||
list of phandles to the additional hardware FPGA Bridges.
|
||||
|
||||
For partial reconfiguration (PR), each PR region will have an FPGA Region.
|
||||
These FPGA regions are children of FPGA bridges which are then children of the
|
||||
base FPGA region. The "Full Reconfiguration to add PRR's" example below shows
|
||||
this.
|
||||
|
||||
If an FPGA Region does not specify a FPGA Manager, it will inherit the FPGA
|
||||
Manager specified by its ancestor FPGA Region. This supports both the case
|
||||
where the same FPGA Manager is used for all of a FPGA as well the case where
|
||||
a different FPGA Manager is used for each region.
|
||||
|
||||
FPGA Regions do not inherit their ancestor FPGA regions' bridges. This prevents
|
||||
shutting down bridges that are upstream from the other active regions while one
|
||||
region is getting reconfigured (see Figure 1 above). During PR, the FPGA's
|
||||
hardware bridges remain enabled. The PR regions' bridges will be FPGA bridges
|
||||
within the static image of the FPGA.
|
||||
|
||||
Required properties:
|
||||
- compatible : should contain "fpga-region"
|
||||
- fpga-mgr : should contain a phandle to an FPGA Manager. Child FPGA Regions
|
||||
inherit this property from their ancestor regions. A fpga-mgr property
|
||||
in a region will override any inherited FPGA manager.
|
||||
- #address-cells, #size-cells, ranges : must be present to handle address space
|
||||
mapping for child nodes.
|
||||
|
||||
Optional properties:
|
||||
- firmware-name : should contain the name of an FPGA image file located on the
|
||||
firmware search path. If this property shows up in a live device tree
|
||||
it indicates that the FPGA has already been programmed with this image.
|
||||
If this property is in an overlay targeting a FPGA region, it is a
|
||||
request to program the FPGA with that image.
|
||||
- fpga-bridges : should contain a list of phandles to FPGA Bridges that must be
|
||||
controlled during FPGA programming along with the parent FPGA bridge.
|
||||
This property is optional if the FPGA Manager handles the bridges.
|
||||
If the fpga-region is the child of a fpga-bridge, the list should not
|
||||
contain the parent bridge.
|
||||
- partial-fpga-config : boolean, set if partial reconfiguration is to be done,
|
||||
otherwise full reconfiguration is done.
|
||||
- external-fpga-config : boolean, set if the FPGA has already been configured
|
||||
prior to OS boot up.
|
||||
- region-unfreeze-timeout-us : The maximum time in microseconds to wait for
|
||||
bridges to successfully become enabled after the region has been
|
||||
programmed.
|
||||
- region-freeze-timeout-us : The maximum time in microseconds to wait for
|
||||
bridges to successfully become disabled before the region has been
|
||||
programmed.
|
||||
- child nodes : devices in the FPGA after programming.
|
||||
|
||||
In the example below, when an overlay is applied targeting fpga-region0,
|
||||
fpga_mgr is used to program the FPGA. Two bridges are controlled during
|
||||
programming: the parent fpga_bridge0 and fpga_bridge1. Because the region is
|
||||
the child of fpga_bridge0, only fpga_bridge1 needs to be specified in the
|
||||
fpga-bridges property. During programming, these bridges are disabled, the
|
||||
firmware specified in the overlay is loaded to the FPGA using the FPGA manager
|
||||
specified in the region. If FPGA programming succeeds, the bridges are
|
||||
reenabled and the overlay makes it into the live device tree. The child devices
|
||||
are then populated. If FPGA programming fails, the bridges are left disabled
|
||||
and the overlay is rejected. The overlay's ranges property maps the lwhps
|
||||
bridge's region (0xff200000) and the hps bridge's region (0xc0000000) for use by
|
||||
the two child devices.
|
||||
|
||||
Example:
|
||||
Base tree contains:
|
||||
|
||||
fpga_mgr: fpga-mgr@ff706000 {
|
||||
compatible = "altr,socfpga-fpga-mgr";
|
||||
reg = <0xff706000 0x1000
|
||||
0xffb90000 0x20>;
|
||||
interrupts = <0 175 4>;
|
||||
};
|
||||
|
||||
fpga_bridge0: fpga-bridge@ff400000 {
|
||||
compatible = "altr,socfpga-lwhps2fpga-bridge";
|
||||
reg = <0xff400000 0x100000>;
|
||||
resets = <&rst LWHPS2FPGA_RESET>;
|
||||
clocks = <&l4_main_clk>;
|
||||
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
ranges;
|
||||
|
||||
fpga_region0: fpga-region0 {
|
||||
compatible = "fpga-region";
|
||||
fpga-mgr = <&fpga_mgr>;
|
||||
};
|
||||
};
|
||||
|
||||
fpga_bridge1: fpga-bridge@ff500000 {
|
||||
compatible = "altr,socfpga-hps2fpga-bridge";
|
||||
reg = <0xff500000 0x10000>;
|
||||
resets = <&rst HPS2FPGA_RESET>;
|
||||
clocks = <&l4_main_clk>;
|
||||
};
|
||||
|
||||
Overlay contains:
|
||||
|
||||
/dts-v1/ /plugin/;
|
||||
/ {
|
||||
fragment@0 {
|
||||
target = <&fpga_region0>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
__overlay__ {
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
|
||||
firmware-name = "soc_system.rbf";
|
||||
fpga-bridges = <&fpga_bridge1>;
|
||||
ranges = <0x20000 0xff200000 0x100000>,
|
||||
<0x0 0xc0000000 0x20000000>;
|
||||
|
||||
gpio@10040 {
|
||||
compatible = "altr,pio-1.0";
|
||||
reg = <0x10040 0x20>;
|
||||
altr,gpio-bank-width = <4>;
|
||||
#gpio-cells = <2>;
|
||||
clocks = <2>;
|
||||
gpio-controller;
|
||||
};
|
||||
|
||||
onchip-memory {
|
||||
device_type = "memory";
|
||||
compatible = "altr,onchipmem-15.1";
|
||||
reg = <0x0 0x10000>;
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
|
||||
Supported Use Models
|
||||
====================
|
||||
|
||||
In all cases the live DT must have the FPGA Manager, FPGA Bridges (if any), and
|
||||
a FPGA Region. The target of the Device Tree Overlay is the FPGA Region. Some
|
||||
uses are specific to a FPGA device.
|
||||
|
||||
* No FPGA Bridges
|
||||
In this case, the FPGA Manager which programs the FPGA also handles the
|
||||
bridges behind the scenes. No FPGA Bridge devices are needed for full
|
||||
reconfiguration.
|
||||
|
||||
* Full reconfiguration with hardware bridges
|
||||
In this case, there are hardware bridges between the processor and FPGA that
|
||||
need to be controlled during full reconfiguration. Before the overlay is
|
||||
applied, the live DT must include the FPGA Manager, FPGA Bridges, and a
|
||||
FPGA Region. The FPGA Region is the child of the bridge that allows
|
||||
register access to the FPGA. Additional bridges may be listed in a
|
||||
fpga-bridges property in the FPGA region or in the device tree overlay.
|
||||
|
||||
* Partial reconfiguration with bridges in the FPGA
|
||||
In this case, the FPGA will have one or more PRR's that may be programmed
|
||||
separately while the rest of the FPGA can remain active. To manage this,
|
||||
bridges need to exist in the FPGA that can gate the buses going to each FPGA
|
||||
region while the buses are enabled for other sections. Before any partial
|
||||
reconfiguration can be done, a base FPGA image must be loaded which includes
|
||||
PRR's with FPGA bridges. The device tree should have a FPGA region for each
|
||||
PRR.
|
||||
|
||||
Device Tree Examples
|
||||
====================
|
||||
|
||||
The intention of this section is to give some simple examples, focusing on
|
||||
the placement of the elements detailed above, especially:
|
||||
* FPGA Manager
|
||||
* FPGA Bridges
|
||||
* FPGA Region
|
||||
* ranges
|
||||
* target-path or target
|
||||
|
||||
For the purposes of this section, I'm dividing the Device Tree into two parts,
|
||||
each with its own requirements. The two parts are:
|
||||
* The live DT prior to the overlay being added
|
||||
* The DT overlay
|
||||
|
||||
The live Device Tree must contain an FPGA Region, an FPGA Manager, and any FPGA
|
||||
Bridges. The FPGA Region's "fpga-mgr" property specifies the manager by phandle
|
||||
to handle programming the FPGA. If the FPGA Region is the child of another FPGA
|
||||
Region, the parent's FPGA Manager is used. If FPGA Bridges need to be involved,
|
||||
they are specified in the FPGA Region by the "fpga-bridges" property. During
|
||||
FPGA programming, the FPGA Region will disable the bridges that are in its
|
||||
"fpga-bridges" list and will re-enable them after FPGA programming has
|
||||
succeeded.
|
||||
|
||||
The Device Tree Overlay will contain:
|
||||
* "target-path" or "target"
|
||||
The insertion point where the the contents of the overlay will go into the
|
||||
live tree. target-path is a full path, while target is a phandle.
|
||||
* "ranges"
|
||||
The address space mapping from processor to FPGA bus(ses).
|
||||
* "firmware-name"
|
||||
Specifies the name of the FPGA image file on the firmware search
|
||||
path. The search path is described in the firmware class documentation.
|
||||
* "partial-fpga-config"
|
||||
This binding is a boolean and should be present if partial reconfiguration
|
||||
is to be done.
|
||||
* child nodes corresponding to hardware that will be loaded in this region of
|
||||
the FPGA.
|
||||
|
||||
Device Tree Example: Full Reconfiguration without Bridges
|
||||
=========================================================
|
||||
|
||||
Live Device Tree contains:
|
||||
fpga_mgr0: fpga-mgr@f8007000 {
|
||||
compatible = "xlnx,zynq-devcfg-1.0";
|
||||
reg = <0xf8007000 0x100>;
|
||||
interrupt-parent = <&intc>;
|
||||
interrupts = <0 8 4>;
|
||||
clocks = <&clkc 12>;
|
||||
clock-names = "ref_clk";
|
||||
syscon = <&slcr>;
|
||||
};
|
||||
|
||||
fpga_region0: fpga-region0 {
|
||||
compatible = "fpga-region";
|
||||
fpga-mgr = <&fpga_mgr0>;
|
||||
#address-cells = <0x1>;
|
||||
#size-cells = <0x1>;
|
||||
ranges;
|
||||
};
|
||||
|
||||
DT Overlay contains:
|
||||
/dts-v1/ /plugin/;
|
||||
/ {
|
||||
fragment@0 {
|
||||
target = <&fpga_region0>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
__overlay__ {
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
|
||||
firmware-name = "zynq-gpio.bin";
|
||||
|
||||
gpio1: gpio@40000000 {
|
||||
compatible = "xlnx,xps-gpio-1.00.a";
|
||||
reg = <0x40000000 0x10000>;
|
||||
gpio-controller;
|
||||
#gpio-cells = <0x2>;
|
||||
xlnx,gpio-width= <0x6>;
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
Device Tree Example: Full Reconfiguration to add PRR's
|
||||
======================================================
|
||||
|
||||
The base FPGA Region is specified similar to the first example above.
|
||||
|
||||
This example programs the FPGA to have two regions that can later be partially
|
||||
configured. Each region has its own bridge in the FPGA fabric.
|
||||
|
||||
DT Overlay contains:
|
||||
/dts-v1/ /plugin/;
|
||||
/ {
|
||||
fragment@0 {
|
||||
target = <&fpga_region0>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
__overlay__ {
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
|
||||
firmware-name = "base.rbf";
|
||||
|
||||
fpga-bridge@4400 {
|
||||
compatible = "altr,freeze-bridge";
|
||||
reg = <0x4400 0x10>;
|
||||
|
||||
fpga_region1: fpga-region1 {
|
||||
compatible = "fpga-region";
|
||||
#address-cells = <0x1>;
|
||||
#size-cells = <0x1>;
|
||||
ranges;
|
||||
};
|
||||
};
|
||||
|
||||
fpga-bridge@4420 {
|
||||
compatible = "altr,freeze-bridge";
|
||||
reg = <0x4420 0x10>;
|
||||
|
||||
fpga_region2: fpga-region2 {
|
||||
compatible = "fpga-region";
|
||||
#address-cells = <0x1>;
|
||||
#size-cells = <0x1>;
|
||||
ranges;
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
Device Tree Example: Partial Reconfiguration
|
||||
============================================
|
||||
|
||||
This example reprograms one of the PRR's set up in the previous example.
|
||||
|
||||
The sequence that occurs when this overlay is similar to the above, the only
|
||||
differences are that the FPGA is partially reconfigured due to the
|
||||
"partial-fpga-config" boolean and the only bridge that is controlled during
|
||||
programming is the FPGA based bridge of fpga_region1.
|
||||
|
||||
/dts-v1/ /plugin/;
|
||||
/ {
|
||||
fragment@0 {
|
||||
target = <&fpga_region1>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
__overlay__ {
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
|
||||
firmware-name = "soc_image2.rbf";
|
||||
partial-fpga-config;
|
||||
|
||||
gpio@10040 {
|
||||
compatible = "altr,pio-1.0";
|
||||
reg = <0x10040 0x20>;
|
||||
clocks = <0x2>;
|
||||
altr,gpio-bank-width = <0x4>;
|
||||
resetvalue = <0x0>;
|
||||
#gpio-cells = <0x2>;
|
||||
gpio-controller;
|
||||
};
|
||||
};
|
||||
};
|
||||
};
|
||||
|
||||
Constraints
|
||||
===========
|
||||
|
||||
It is beyond the scope of this document to fully describe all the FPGA design
|
||||
constraints required to make partial reconfiguration work[1] [2] [3], but a few
|
||||
deserve quick mention.
|
||||
|
||||
A persona must have boundary connections that line up with those of the partion
|
||||
or region it is designed to go into.
|
||||
|
||||
During programming, transactions through those connections must be stopped and
|
||||
the connections must be held at a fixed logic level. This can be achieved by
|
||||
FPGA Bridges that exist on the FPGA fabric prior to the partial reconfiguration.
|
||||
|
||||
--
|
||||
[1] www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/ug/ug_partrecon.pdf
|
||||
[2] tspace.library.utoronto.ca/bitstream/1807/67932/1/Byma_Stuart_A_201411_MAS_thesis.pdf
|
||||
[3] http://www.xilinx.com/support/documentation/sw_manuals/xilinx14_1/ug702.pdf
|
|
@ -0,0 +1,17 @@
|
|||
Broadcom OTP memory controller
|
||||
|
||||
Required Properties:
|
||||
- compatible: "brcm,ocotp" for the first generation Broadcom OTPC which is used
|
||||
in Cygnus and supports 32 bit read/write. Use "brcm,ocotp-v2" for the second
|
||||
generation Broadcom OTPC which is used in SoC's such as Stingray and supports
|
||||
64-bit read/write.
|
||||
- reg: Base address of the OTP controller.
|
||||
- brcm,ocotp-size: Amount of memory available, in 32 bit words
|
||||
|
||||
Example:
|
||||
|
||||
otp: otp@0301c800 {
|
||||
compatible = "brcm,ocotp";
|
||||
reg = <0x0301c800 0x2c>;
|
||||
brcm,ocotp-size = <2048>;
|
||||
};
|
|
@ -0,0 +1,20 @@
|
|||
* NXP LPC18xx OTP memory
|
||||
|
||||
Internal OTP (One Time Programmable) memory for NXP LPC18xx/43xx devices.
|
||||
|
||||
Required properties:
|
||||
- compatible: Should be "nxp,lpc1850-otp"
|
||||
- reg: Must contain an entry with the physical base address and length
|
||||
for each entry in reg-names.
|
||||
- address-cells: must be set to 1.
|
||||
- size-cells: must be set to 1.
|
||||
|
||||
See nvmem.txt for more information.
|
||||
|
||||
Example:
|
||||
otp: otp@40045000 {
|
||||
compatible = "nxp,lpc1850-otp";
|
||||
reg = <0x40045000 0x1000>;
|
||||
#address-cells = <1>;
|
||||
#size-cells = <1>;
|
||||
};
|
|
@ -127,6 +127,7 @@ hitex Hitex Development Tools
|
|||
holt Holt Integrated Circuits, Inc.
|
||||
honeywell Honeywell
|
||||
hp Hewlett Packard
|
||||
holtek Holtek Semiconductor, Inc.
|
||||
i2se I2SE GmbH
|
||||
ibm International Business Machines (IBM)
|
||||
idt Integrated Device Technologies, Inc.
|
||||
|
|
|
@ -18,31 +18,37 @@ API Functions:
|
|||
To program the FPGA from a file or from a buffer:
|
||||
-------------------------------------------------
|
||||
|
||||
int fpga_mgr_buf_load(struct fpga_manager *mgr, u32 flags,
|
||||
int fpga_mgr_buf_load(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count);
|
||||
|
||||
Load the FPGA from an image which exists as a buffer in memory.
|
||||
|
||||
int fpga_mgr_firmware_load(struct fpga_manager *mgr, u32 flags,
|
||||
int fpga_mgr_firmware_load(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *image_name);
|
||||
|
||||
Load the FPGA from an image which exists as a file. The image file must be on
|
||||
the firmware search path (see the firmware class documentation).
|
||||
|
||||
For both these functions, flags == 0 for normal full reconfiguration or
|
||||
FPGA_MGR_PARTIAL_RECONFIG for partial reconfiguration. If successful, the FPGA
|
||||
ends up in operating mode. Return 0 on success or a negative error code.
|
||||
the firmware search path (see the firmware class documentation). If successful,
|
||||
the FPGA ends up in operating mode. Return 0 on success or a negative error
|
||||
code.
|
||||
|
||||
A FPGA design contained in a FPGA image file will likely have particulars that
|
||||
affect how the image is programmed to the FPGA. These are contained in struct
|
||||
fpga_image_info. Currently the only such particular is a single flag bit
|
||||
indicating whether the image is for full or partial reconfiguration.
|
||||
|
||||
To get/put a reference to a FPGA manager:
|
||||
-----------------------------------------
|
||||
|
||||
struct fpga_manager *of_fpga_mgr_get(struct device_node *node);
|
||||
struct fpga_manager *fpga_mgr_get(struct device *dev);
|
||||
|
||||
Given a DT node or device, get an exclusive reference to a FPGA manager.
|
||||
|
||||
void fpga_mgr_put(struct fpga_manager *mgr);
|
||||
|
||||
Given a DT node, get an exclusive reference to a FPGA manager or release
|
||||
the reference.
|
||||
Release the reference.
|
||||
|
||||
|
||||
To register or unregister the low level FPGA-specific driver:
|
||||
|
@ -70,8 +76,11 @@ struct device_node *mgr_node = ...
|
|||
char *buf = ...
|
||||
int count = ...
|
||||
|
||||
/* struct with information about the FPGA image to program. */
|
||||
struct fpga_image_info info;
|
||||
|
||||
/* flags indicates whether to do full or partial reconfiguration */
|
||||
int flags = 0;
|
||||
info.flags = 0;
|
||||
|
||||
int ret;
|
||||
|
||||
|
@ -79,7 +88,7 @@ int ret;
|
|||
struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node);
|
||||
|
||||
/* Load the buffer to the FPGA */
|
||||
ret = fpga_mgr_buf_load(mgr, flags, buf, count);
|
||||
ret = fpga_mgr_buf_load(mgr, &info, buf, count);
|
||||
|
||||
/* Release the FPGA manager */
|
||||
fpga_mgr_put(mgr);
|
||||
|
@ -96,8 +105,11 @@ struct device_node *mgr_node = ...
|
|||
/* FPGA image is in this file which is in the firmware search path */
|
||||
const char *path = "fpga-image-9.rbf"
|
||||
|
||||
/* struct with information about the FPGA image to program. */
|
||||
struct fpga_image_info info;
|
||||
|
||||
/* flags indicates whether to do full or partial reconfiguration */
|
||||
int flags = 0;
|
||||
info.flags = 0;
|
||||
|
||||
int ret;
|
||||
|
||||
|
@ -105,7 +117,7 @@ int ret;
|
|||
struct fpga_manager *mgr = of_fpga_mgr_get(mgr_node);
|
||||
|
||||
/* Get the firmware image (path) and load it to the FPGA */
|
||||
ret = fpga_mgr_firmware_load(mgr, flags, path);
|
||||
ret = fpga_mgr_firmware_load(mgr, &info, path);
|
||||
|
||||
/* Release the FPGA manager */
|
||||
fpga_mgr_put(mgr);
|
||||
|
@ -157,7 +169,10 @@ The programming sequence is:
|
|||
2. .write (may be called once or multiple times)
|
||||
3. .write_complete
|
||||
|
||||
The .write_init function will prepare the FPGA to receive the image data.
|
||||
The .write_init function will prepare the FPGA to receive the image data. The
|
||||
buffer passed into .write_init will be atmost .initial_header_size bytes long,
|
||||
if the whole bitstream is not immediately available then the core code will
|
||||
buffer up at least this much before starting.
|
||||
|
||||
The .write function writes a buffer to the FPGA. The buffer may be contain the
|
||||
whole FPGA image or may be a smaller chunk of an FPGA image. In the latter
|
||||
|
|
|
@ -97,3 +97,25 @@ $ echo 0 > /sys/bus/intel_th/devices/0-msc0/active
|
|||
# and now you can collect the trace from the device node:
|
||||
|
||||
$ cat /dev/intel_th0/msc0 > my_stp_trace
|
||||
|
||||
Host Debugger Mode
|
||||
==================
|
||||
|
||||
It is possible to configure the Trace Hub and control its trace
|
||||
capture from a remote debug host, which should be connected via one of
|
||||
the hardware debugging interfaces, which will then be used to both
|
||||
control Intel Trace Hub and transfer its trace data to the debug host.
|
||||
|
||||
The driver needs to be told that such an arrangement is taking place
|
||||
so that it does not touch any capture/port configuration and avoids
|
||||
conflicting with the debug host's configuration accesses. The only
|
||||
activity that the driver will perform in this mode is collecting
|
||||
software traces to the Software Trace Hub (an stm class device). The
|
||||
user is still responsible for setting up adequate master/channel
|
||||
mappings that the decoder on the receiving end would recognize.
|
||||
|
||||
In order to enable the host mode, set the 'host_mode' parameter of the
|
||||
'intel_th' kernel module to 'y'. None of the virtual output devices
|
||||
will show up on the intel_th bus. Also, trace configuration and
|
||||
capture controlling attribute groups of the 'gth' device will not be
|
||||
exposed. The 'sth' device will operate as usual.
|
||||
|
|
|
@ -69,12 +69,43 @@ stm device's channel mmio region is 64 bytes and hardware page size is
|
|||
width==64, you should be able to mmap() one page on this file
|
||||
descriptor and obtain direct access to an mmio region for 64 channels.
|
||||
|
||||
For kernel-based trace sources, there is "stm_source" device
|
||||
class. Devices of this class can be connected and disconnected to/from
|
||||
stm devices at runtime via a sysfs attribute.
|
||||
|
||||
Examples of STM devices are Intel(R) Trace Hub [1] and Coresight STM
|
||||
[2].
|
||||
|
||||
stm_source
|
||||
==========
|
||||
|
||||
For kernel-based trace sources, there is "stm_source" device
|
||||
class. Devices of this class can be connected and disconnected to/from
|
||||
stm devices at runtime via a sysfs attribute called "stm_source_link"
|
||||
by writing the name of the desired stm device there, for example:
|
||||
|
||||
$ echo dummy_stm.0 > /sys/class/stm_source/console/stm_source_link
|
||||
|
||||
For examples on how to use stm_source interface in the kernel, refer
|
||||
to stm_console or stm_heartbeat drivers.
|
||||
|
||||
Each stm_source device will need to assume a master and a range of
|
||||
channels, depending on how many channels it requires. These are
|
||||
allocated for the device according to the policy configuration. If
|
||||
there's a node in the root of the policy directory that matches the
|
||||
stm_source device's name (for example, "console"), this node will be
|
||||
used to allocate master and channel numbers. If there's no such policy
|
||||
node, the stm core will pick the first contiguous chunk of channels
|
||||
within the first available master. Note that the node must exist
|
||||
before the stm_source device is connected to its stm device.
|
||||
|
||||
stm_console
|
||||
===========
|
||||
|
||||
One implementation of this interface also used in the example above is
|
||||
the "stm_console" driver, which basically provides a one-way console
|
||||
for kernel messages over an stm device.
|
||||
|
||||
To configure the master/channel pair that will be assigned to this
|
||||
console in the STP stream, create a "console" policy entry (see the
|
||||
beginning of this text on how to do that). When initialized, it will
|
||||
consume one channel.
|
||||
|
||||
[1] https://software.intel.com/sites/default/files/managed/d3/3c/intel-th-developer-manual.pdf
|
||||
[2] http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0444b/index.html
|
||||
|
|
20
MAINTAINERS
20
MAINTAINERS
|
@ -3067,6 +3067,12 @@ F: drivers/usb/host/whci/
|
|||
F: drivers/usb/wusbcore/
|
||||
F: include/linux/usb/wusb*
|
||||
|
||||
HT16K33 LED CONTROLLER DRIVER
|
||||
M: Robin van der Gracht <robin@protonic.nl>
|
||||
S: Maintained
|
||||
F: drivers/auxdisplay/ht16k33.c
|
||||
F: Documentation/devicetree/bindings/display/ht16k33.txt
|
||||
|
||||
CFAG12864B LCD DRIVER
|
||||
M: Miguel Ojeda Sandonis <miguel.ojeda.sandonis@gmail.com>
|
||||
W: http://miguelojeda.es/auxdisplay.htm
|
||||
|
@ -5043,7 +5049,9 @@ K: fmc_d.*register
|
|||
FPGA MANAGER FRAMEWORK
|
||||
M: Alan Tull <atull@opensource.altera.com>
|
||||
R: Moritz Fischer <moritz.fischer@ettus.com>
|
||||
L: linux-fpga@vger.kernel.org
|
||||
S: Maintained
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git
|
||||
F: drivers/fpga/
|
||||
F: include/linux/fpga/fpga-mgr.h
|
||||
W: http://www.rocketboards.org
|
||||
|
@ -5940,6 +5948,7 @@ F: drivers/input/serio/hyperv-keyboard.c
|
|||
F: drivers/pci/host/pci-hyperv.c
|
||||
F: drivers/net/hyperv/
|
||||
F: drivers/scsi/storvsc_drv.c
|
||||
F: drivers/uio/uio_hv_generic.c
|
||||
F: drivers/video/fbdev/hyperv_fb.c
|
||||
F: include/linux/hyperv.h
|
||||
F: tools/hv/
|
||||
|
@ -9231,7 +9240,7 @@ F: drivers/misc/panel.c
|
|||
|
||||
PARALLEL PORT SUBSYSTEM
|
||||
M: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
|
||||
M: Sudip Mukherjee <sudip@vectorindia.org>
|
||||
M: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
|
||||
L: linux-parport@lists.infradead.org (subscribers-only)
|
||||
S: Maintained
|
||||
F: drivers/parport/
|
||||
|
@ -10841,6 +10850,11 @@ W: http://www.sunplus.com
|
|||
S: Supported
|
||||
F: arch/score/
|
||||
|
||||
SCR24X CHIP CARD INTERFACE DRIVER
|
||||
M: Lubomir Rintel <lkundrak@v3.sk>
|
||||
S: Supported
|
||||
F: drivers/char/pcmcia/scr24x_cs.c
|
||||
|
||||
SYSTEM CONTROL & POWER INTERFACE (SCPI) Message Protocol drivers
|
||||
M: Sudeep Holla <sudeep.holla@arm.com>
|
||||
L: linux-arm-kernel@lists.infradead.org
|
||||
|
@ -11244,7 +11258,7 @@ F: include/media/i2c/ov2659.h
|
|||
SILICON MOTION SM712 FRAME BUFFER DRIVER
|
||||
M: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
|
||||
M: Teddy Wang <teddy.wang@siliconmotion.com>
|
||||
M: Sudip Mukherjee <sudip@vectorindia.org>
|
||||
M: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
|
||||
L: linux-fbdev@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/video/fbdev/sm712*
|
||||
|
@ -11672,7 +11686,7 @@ F: drivers/staging/rtl8712/
|
|||
STAGING - SILICON MOTION SM750 FRAME BUFFER DRIVER
|
||||
M: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
|
||||
M: Teddy Wang <teddy.wang@siliconmotion.com>
|
||||
M: Sudip Mukherjee <sudip@vectorindia.org>
|
||||
M: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
|
||||
L: linux-fbdev@vger.kernel.org
|
||||
S: Maintained
|
||||
F: drivers/staging/sm750fb/
|
||||
|
|
|
@ -1,4 +1,6 @@
|
|||
/* Load firmware into Core B on a BF561
|
||||
*
|
||||
* Author: Bas Vermeulen <bvermeul@blackstar.xs4all.nl>
|
||||
*
|
||||
* Copyright 2004-2009 Analog Devices Inc.
|
||||
* Licensed under the GPL-2 or later.
|
||||
|
@ -14,9 +16,9 @@
|
|||
|
||||
#include <linux/device.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/miscdevice.h>
|
||||
#include <linux/module.h>
|
||||
|
||||
#define CMD_COREB_START _IO('b', 0)
|
||||
#define CMD_COREB_STOP _IO('b', 1)
|
||||
|
@ -59,8 +61,4 @@ static struct miscdevice coreb_dev = {
|
|||
.name = "coreb",
|
||||
.fops = &coreb_fops,
|
||||
};
|
||||
module_misc_device(coreb_dev);
|
||||
|
||||
MODULE_AUTHOR("Bas Vermeulen <bvermeul@blackstar.xs4all.nl>");
|
||||
MODULE_DESCRIPTION("BF561 Core B Support");
|
||||
MODULE_LICENSE("GPL");
|
||||
builtin_misc_device(coreb_dev);
|
||||
|
|
|
@ -128,4 +128,17 @@ config IMG_ASCII_LCD
|
|||
development boards such as the MIPS Boston, MIPS Malta & MIPS SEAD3
|
||||
from Imagination Technologies.
|
||||
|
||||
config HT16K33
|
||||
tristate "Holtek Ht16K33 LED controller with keyscan"
|
||||
depends on FB && OF && I2C && INPUT
|
||||
select FB_SYS_FOPS
|
||||
select FB_CFB_FILLRECT
|
||||
select FB_CFB_COPYAREA
|
||||
select FB_CFB_IMAGEBLIT
|
||||
select INPUT_MATRIXKMAP
|
||||
select FB_BACKLIGHT
|
||||
help
|
||||
Say yes here to add support for Holtek HT16K33, RAM mapping 16*8
|
||||
LED controller driver with keyscan.
|
||||
|
||||
endif # AUXDISPLAY
|
||||
|
|
|
@ -5,3 +5,4 @@
|
|||
obj-$(CONFIG_KS0108) += ks0108.o
|
||||
obj-$(CONFIG_CFAG12864B) += cfag12864b.o cfag12864bfb.o
|
||||
obj-$(CONFIG_IMG_ASCII_LCD) += img-ascii-lcd.o
|
||||
obj-$(CONFIG_HT16K33) += ht16k33.o
|
||||
|
|
|
@ -0,0 +1,563 @@
|
|||
/*
|
||||
* HT16K33 driver
|
||||
*
|
||||
* Author: Robin van der Gracht <robin@protonic.nl>
|
||||
*
|
||||
* Copyright: (C) 2016 Protonic Holland.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2 as
|
||||
* published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful, but
|
||||
* WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
|
||||
* General Public License for more details.
|
||||
*/
|
||||
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/i2c.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/fb.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/backlight.h>
|
||||
#include <linux/input.h>
|
||||
#include <linux/input/matrix_keypad.h>
|
||||
#include <linux/workqueue.h>
|
||||
#include <linux/mm.h>
|
||||
|
||||
/* Registers */
|
||||
#define REG_SYSTEM_SETUP 0x20
|
||||
#define REG_SYSTEM_SETUP_OSC_ON BIT(0)
|
||||
|
||||
#define REG_DISPLAY_SETUP 0x80
|
||||
#define REG_DISPLAY_SETUP_ON BIT(0)
|
||||
|
||||
#define REG_ROWINT_SET 0xA0
|
||||
#define REG_ROWINT_SET_INT_EN BIT(0)
|
||||
#define REG_ROWINT_SET_INT_ACT_HIGH BIT(1)
|
||||
|
||||
#define REG_BRIGHTNESS 0xE0
|
||||
|
||||
/* Defines */
|
||||
#define DRIVER_NAME "ht16k33"
|
||||
|
||||
#define MIN_BRIGHTNESS 0x1
|
||||
#define MAX_BRIGHTNESS 0x10
|
||||
|
||||
#define HT16K33_MATRIX_LED_MAX_COLS 8
|
||||
#define HT16K33_MATRIX_LED_MAX_ROWS 16
|
||||
#define HT16K33_MATRIX_KEYPAD_MAX_COLS 3
|
||||
#define HT16K33_MATRIX_KEYPAD_MAX_ROWS 12
|
||||
|
||||
#define BYTES_PER_ROW (HT16K33_MATRIX_LED_MAX_ROWS / 8)
|
||||
#define HT16K33_FB_SIZE (HT16K33_MATRIX_LED_MAX_COLS * BYTES_PER_ROW)
|
||||
|
||||
struct ht16k33_keypad {
|
||||
struct input_dev *dev;
|
||||
spinlock_t lock;
|
||||
struct delayed_work work;
|
||||
uint32_t cols;
|
||||
uint32_t rows;
|
||||
uint32_t row_shift;
|
||||
uint32_t debounce_ms;
|
||||
uint16_t last_key_state[HT16K33_MATRIX_KEYPAD_MAX_COLS];
|
||||
};
|
||||
|
||||
struct ht16k33_fbdev {
|
||||
struct fb_info *info;
|
||||
uint32_t refresh_rate;
|
||||
uint8_t *buffer;
|
||||
uint8_t *cache;
|
||||
struct delayed_work work;
|
||||
};
|
||||
|
||||
struct ht16k33_priv {
|
||||
struct i2c_client *client;
|
||||
struct ht16k33_keypad keypad;
|
||||
struct ht16k33_fbdev fbdev;
|
||||
struct workqueue_struct *workqueue;
|
||||
};
|
||||
|
||||
static struct fb_fix_screeninfo ht16k33_fb_fix = {
|
||||
.id = DRIVER_NAME,
|
||||
.type = FB_TYPE_PACKED_PIXELS,
|
||||
.visual = FB_VISUAL_MONO10,
|
||||
.xpanstep = 0,
|
||||
.ypanstep = 0,
|
||||
.ywrapstep = 0,
|
||||
.line_length = HT16K33_MATRIX_LED_MAX_ROWS,
|
||||
.accel = FB_ACCEL_NONE,
|
||||
};
|
||||
|
||||
static struct fb_var_screeninfo ht16k33_fb_var = {
|
||||
.xres = HT16K33_MATRIX_LED_MAX_ROWS,
|
||||
.yres = HT16K33_MATRIX_LED_MAX_COLS,
|
||||
.xres_virtual = HT16K33_MATRIX_LED_MAX_ROWS,
|
||||
.yres_virtual = HT16K33_MATRIX_LED_MAX_COLS,
|
||||
.bits_per_pixel = 1,
|
||||
.red = { 0, 1, 0 },
|
||||
.green = { 0, 1, 0 },
|
||||
.blue = { 0, 1, 0 },
|
||||
.left_margin = 0,
|
||||
.right_margin = 0,
|
||||
.upper_margin = 0,
|
||||
.lower_margin = 0,
|
||||
.vmode = FB_VMODE_NONINTERLACED,
|
||||
};
|
||||
|
||||
static int ht16k33_display_on(struct ht16k33_priv *priv)
|
||||
{
|
||||
uint8_t data = REG_DISPLAY_SETUP | REG_DISPLAY_SETUP_ON;
|
||||
|
||||
return i2c_smbus_write_byte(priv->client, data);
|
||||
}
|
||||
|
||||
static int ht16k33_display_off(struct ht16k33_priv *priv)
|
||||
{
|
||||
return i2c_smbus_write_byte(priv->client, REG_DISPLAY_SETUP);
|
||||
}
|
||||
|
||||
static void ht16k33_fb_queue(struct ht16k33_priv *priv)
|
||||
{
|
||||
struct ht16k33_fbdev *fbdev = &priv->fbdev;
|
||||
|
||||
queue_delayed_work(priv->workqueue, &fbdev->work,
|
||||
msecs_to_jiffies(HZ / fbdev->refresh_rate));
|
||||
}
|
||||
|
||||
static void ht16k33_keypad_queue(struct ht16k33_priv *priv)
|
||||
{
|
||||
struct ht16k33_keypad *keypad = &priv->keypad;
|
||||
|
||||
queue_delayed_work(priv->workqueue, &keypad->work,
|
||||
msecs_to_jiffies(keypad->debounce_ms));
|
||||
}
|
||||
|
||||
/*
|
||||
* This gets the fb data from cache and copies it to ht16k33 display RAM
|
||||
*/
|
||||
static void ht16k33_fb_update(struct work_struct *work)
|
||||
{
|
||||
struct ht16k33_fbdev *fbdev =
|
||||
container_of(work, struct ht16k33_fbdev, work.work);
|
||||
struct ht16k33_priv *priv =
|
||||
container_of(fbdev, struct ht16k33_priv, fbdev);
|
||||
|
||||
uint8_t *p1, *p2;
|
||||
int len, pos = 0, first = -1;
|
||||
|
||||
p1 = fbdev->cache;
|
||||
p2 = fbdev->buffer;
|
||||
|
||||
/* Search for the first byte with changes */
|
||||
while (pos < HT16K33_FB_SIZE && first < 0) {
|
||||
if (*(p1++) - *(p2++))
|
||||
first = pos;
|
||||
pos++;
|
||||
}
|
||||
|
||||
/* No changes found */
|
||||
if (first < 0)
|
||||
goto requeue;
|
||||
|
||||
len = HT16K33_FB_SIZE - first;
|
||||
p1 = fbdev->cache + HT16K33_FB_SIZE - 1;
|
||||
p2 = fbdev->buffer + HT16K33_FB_SIZE - 1;
|
||||
|
||||
/* Determine i2c transfer length */
|
||||
while (len > 1) {
|
||||
if (*(p1--) - *(p2--))
|
||||
break;
|
||||
len--;
|
||||
}
|
||||
|
||||
p1 = fbdev->cache + first;
|
||||
p2 = fbdev->buffer + first;
|
||||
if (!i2c_smbus_write_i2c_block_data(priv->client, first, len, p2))
|
||||
memcpy(p1, p2, len);
|
||||
requeue:
|
||||
ht16k33_fb_queue(priv);
|
||||
}
|
||||
|
||||
static int ht16k33_keypad_start(struct input_dev *dev)
|
||||
{
|
||||
struct ht16k33_priv *priv = input_get_drvdata(dev);
|
||||
struct ht16k33_keypad *keypad = &priv->keypad;
|
||||
|
||||
/*
|
||||
* Schedule an immediate key scan to capture current key state;
|
||||
* columns will be activated and IRQs be enabled after the scan.
|
||||
*/
|
||||
queue_delayed_work(priv->workqueue, &keypad->work, 0);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void ht16k33_keypad_stop(struct input_dev *dev)
|
||||
{
|
||||
struct ht16k33_priv *priv = input_get_drvdata(dev);
|
||||
struct ht16k33_keypad *keypad = &priv->keypad;
|
||||
|
||||
cancel_delayed_work(&keypad->work);
|
||||
/*
|
||||
* ht16k33_keypad_scan() will leave IRQs enabled;
|
||||
* we should disable them now.
|
||||
*/
|
||||
disable_irq_nosync(priv->client->irq);
|
||||
}
|
||||
|
||||
static int ht16k33_initialize(struct ht16k33_priv *priv)
|
||||
{
|
||||
uint8_t byte;
|
||||
int err;
|
||||
uint8_t data[HT16K33_MATRIX_LED_MAX_COLS * 2];
|
||||
|
||||
/* Clear RAM (8 * 16 bits) */
|
||||
memset(data, 0, sizeof(data));
|
||||
err = i2c_smbus_write_block_data(priv->client, 0, sizeof(data), data);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
/* Turn on internal oscillator */
|
||||
byte = REG_SYSTEM_SETUP_OSC_ON | REG_SYSTEM_SETUP;
|
||||
err = i2c_smbus_write_byte(priv->client, byte);
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
/* Configure INT pin */
|
||||
byte = REG_ROWINT_SET | REG_ROWINT_SET_INT_ACT_HIGH;
|
||||
if (priv->client->irq > 0)
|
||||
byte |= REG_ROWINT_SET_INT_EN;
|
||||
return i2c_smbus_write_byte(priv->client, byte);
|
||||
}
|
||||
|
||||
/*
|
||||
* This gets the keys from keypad and reports it to input subsystem
|
||||
*/
|
||||
static void ht16k33_keypad_scan(struct work_struct *work)
|
||||
{
|
||||
struct ht16k33_keypad *keypad =
|
||||
container_of(work, struct ht16k33_keypad, work.work);
|
||||
struct ht16k33_priv *priv =
|
||||
container_of(keypad, struct ht16k33_priv, keypad);
|
||||
const unsigned short *keycodes = keypad->dev->keycode;
|
||||
uint16_t bits_changed, new_state[HT16K33_MATRIX_KEYPAD_MAX_COLS];
|
||||
uint8_t data[HT16K33_MATRIX_KEYPAD_MAX_COLS * 2];
|
||||
int row, col, code;
|
||||
bool reschedule = false;
|
||||
|
||||
if (i2c_smbus_read_i2c_block_data(priv->client, 0x40, 6, data) != 6) {
|
||||
dev_err(&priv->client->dev, "Failed to read key data\n");
|
||||
goto end;
|
||||
}
|
||||
|
||||
for (col = 0; col < keypad->cols; col++) {
|
||||
new_state[col] = (data[col * 2 + 1] << 8) | data[col * 2];
|
||||
if (new_state[col])
|
||||
reschedule = true;
|
||||
bits_changed = keypad->last_key_state[col] ^ new_state[col];
|
||||
|
||||
while (bits_changed) {
|
||||
row = ffs(bits_changed) - 1;
|
||||
code = MATRIX_SCAN_CODE(row, col, keypad->row_shift);
|
||||
input_event(keypad->dev, EV_MSC, MSC_SCAN, code);
|
||||
input_report_key(keypad->dev, keycodes[code],
|
||||
new_state[col] & BIT(row));
|
||||
bits_changed &= ~BIT(row);
|
||||
}
|
||||
}
|
||||
input_sync(keypad->dev);
|
||||
memcpy(keypad->last_key_state, new_state, sizeof(new_state));
|
||||
|
||||
end:
|
||||
if (reschedule)
|
||||
ht16k33_keypad_queue(priv);
|
||||
else
|
||||
enable_irq(priv->client->irq);
|
||||
}
|
||||
|
||||
static irqreturn_t ht16k33_irq_thread(int irq, void *dev)
|
||||
{
|
||||
struct ht16k33_priv *priv = dev;
|
||||
|
||||
disable_irq_nosync(priv->client->irq);
|
||||
ht16k33_keypad_queue(priv);
|
||||
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static int ht16k33_bl_update_status(struct backlight_device *bl)
|
||||
{
|
||||
int brightness = bl->props.brightness;
|
||||
struct ht16k33_priv *priv = bl_get_data(bl);
|
||||
|
||||
if (bl->props.power != FB_BLANK_UNBLANK ||
|
||||
bl->props.fb_blank != FB_BLANK_UNBLANK ||
|
||||
bl->props.state & BL_CORE_FBBLANK || brightness == 0) {
|
||||
return ht16k33_display_off(priv);
|
||||
}
|
||||
|
||||
ht16k33_display_on(priv);
|
||||
return i2c_smbus_write_byte(priv->client,
|
||||
REG_BRIGHTNESS | (brightness - 1));
|
||||
}
|
||||
|
||||
static int ht16k33_bl_check_fb(struct backlight_device *bl, struct fb_info *fi)
|
||||
{
|
||||
struct ht16k33_priv *priv = bl_get_data(bl);
|
||||
|
||||
return (fi == NULL) || (fi->par == priv);
|
||||
}
|
||||
|
||||
static const struct backlight_ops ht16k33_bl_ops = {
|
||||
.update_status = ht16k33_bl_update_status,
|
||||
.check_fb = ht16k33_bl_check_fb,
|
||||
};
|
||||
|
||||
static int ht16k33_mmap(struct fb_info *info, struct vm_area_struct *vma)
|
||||
{
|
||||
struct ht16k33_priv *priv = info->par;
|
||||
|
||||
return vm_insert_page(vma, vma->vm_start,
|
||||
virt_to_page(priv->fbdev.buffer));
|
||||
}
|
||||
|
||||
static struct fb_ops ht16k33_fb_ops = {
|
||||
.owner = THIS_MODULE,
|
||||
.fb_read = fb_sys_read,
|
||||
.fb_write = fb_sys_write,
|
||||
.fb_fillrect = sys_fillrect,
|
||||
.fb_copyarea = sys_copyarea,
|
||||
.fb_imageblit = sys_imageblit,
|
||||
.fb_mmap = ht16k33_mmap,
|
||||
};
|
||||
|
||||
static int ht16k33_probe(struct i2c_client *client,
|
||||
const struct i2c_device_id *id)
|
||||
{
|
||||
int err;
|
||||
uint32_t rows, cols, dft_brightness;
|
||||
struct backlight_device *bl;
|
||||
struct backlight_properties bl_props;
|
||||
struct ht16k33_priv *priv;
|
||||
struct ht16k33_keypad *keypad;
|
||||
struct ht16k33_fbdev *fbdev;
|
||||
struct device_node *node = client->dev.of_node;
|
||||
|
||||
if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
|
||||
dev_err(&client->dev, "i2c_check_functionality error\n");
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
if (client->irq <= 0) {
|
||||
dev_err(&client->dev, "No IRQ specified\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
priv->client = client;
|
||||
i2c_set_clientdata(client, priv);
|
||||
fbdev = &priv->fbdev;
|
||||
keypad = &priv->keypad;
|
||||
|
||||
priv->workqueue = create_singlethread_workqueue(DRIVER_NAME "-wq");
|
||||
if (priv->workqueue == NULL)
|
||||
return -ENOMEM;
|
||||
|
||||
err = ht16k33_initialize(priv);
|
||||
if (err)
|
||||
goto err_destroy_wq;
|
||||
|
||||
/* Framebuffer (2 bytes per column) */
|
||||
BUILD_BUG_ON(PAGE_SIZE < HT16K33_FB_SIZE);
|
||||
fbdev->buffer = (unsigned char *) get_zeroed_page(GFP_KERNEL);
|
||||
if (!fbdev->buffer) {
|
||||
err = -ENOMEM;
|
||||
goto err_free_fbdev;
|
||||
}
|
||||
|
||||
fbdev->cache = devm_kmalloc(&client->dev, HT16K33_FB_SIZE, GFP_KERNEL);
|
||||
if (!fbdev->cache) {
|
||||
err = -ENOMEM;
|
||||
goto err_fbdev_buffer;
|
||||
}
|
||||
|
||||
fbdev->info = framebuffer_alloc(0, &client->dev);
|
||||
if (!fbdev->info) {
|
||||
err = -ENOMEM;
|
||||
goto err_fbdev_buffer;
|
||||
}
|
||||
|
||||
err = of_property_read_u32(node, "refresh-rate-hz",
|
||||
&fbdev->refresh_rate);
|
||||
if (err) {
|
||||
dev_err(&client->dev, "refresh rate not specified\n");
|
||||
goto err_fbdev_info;
|
||||
}
|
||||
fb_bl_default_curve(fbdev->info, 0, MIN_BRIGHTNESS, MAX_BRIGHTNESS);
|
||||
|
||||
INIT_DELAYED_WORK(&fbdev->work, ht16k33_fb_update);
|
||||
fbdev->info->fbops = &ht16k33_fb_ops;
|
||||
fbdev->info->screen_base = (char __iomem *) fbdev->buffer;
|
||||
fbdev->info->screen_size = HT16K33_FB_SIZE;
|
||||
fbdev->info->fix = ht16k33_fb_fix;
|
||||
fbdev->info->var = ht16k33_fb_var;
|
||||
fbdev->info->pseudo_palette = NULL;
|
||||
fbdev->info->flags = FBINFO_FLAG_DEFAULT;
|
||||
fbdev->info->par = priv;
|
||||
|
||||
err = register_framebuffer(fbdev->info);
|
||||
if (err)
|
||||
goto err_fbdev_info;
|
||||
|
||||
/* Keypad */
|
||||
keypad->dev = devm_input_allocate_device(&client->dev);
|
||||
if (!keypad->dev) {
|
||||
err = -ENOMEM;
|
||||
goto err_fbdev_unregister;
|
||||
}
|
||||
|
||||
keypad->dev->name = DRIVER_NAME"-keypad";
|
||||
keypad->dev->id.bustype = BUS_I2C;
|
||||
keypad->dev->open = ht16k33_keypad_start;
|
||||
keypad->dev->close = ht16k33_keypad_stop;
|
||||
|
||||
if (!of_get_property(node, "linux,no-autorepeat", NULL))
|
||||
__set_bit(EV_REP, keypad->dev->evbit);
|
||||
|
||||
err = of_property_read_u32(node, "debounce-delay-ms",
|
||||
&keypad->debounce_ms);
|
||||
if (err) {
|
||||
dev_err(&client->dev, "key debounce delay not specified\n");
|
||||
goto err_fbdev_unregister;
|
||||
}
|
||||
|
||||
err = devm_request_threaded_irq(&client->dev, client->irq, NULL,
|
||||
ht16k33_irq_thread,
|
||||
IRQF_TRIGGER_RISING | IRQF_ONESHOT,
|
||||
DRIVER_NAME, priv);
|
||||
if (err) {
|
||||
dev_err(&client->dev, "irq request failed %d, error %d\n",
|
||||
client->irq, err);
|
||||
goto err_fbdev_unregister;
|
||||
}
|
||||
|
||||
disable_irq_nosync(client->irq);
|
||||
rows = HT16K33_MATRIX_KEYPAD_MAX_ROWS;
|
||||
cols = HT16K33_MATRIX_KEYPAD_MAX_COLS;
|
||||
err = matrix_keypad_parse_of_params(&client->dev, &rows, &cols);
|
||||
if (err)
|
||||
goto err_fbdev_unregister;
|
||||
|
||||
err = matrix_keypad_build_keymap(NULL, NULL, rows, cols, NULL,
|
||||
keypad->dev);
|
||||
if (err) {
|
||||
dev_err(&client->dev, "failed to build keymap\n");
|
||||
goto err_fbdev_unregister;
|
||||
}
|
||||
|
||||
input_set_drvdata(keypad->dev, priv);
|
||||
keypad->rows = rows;
|
||||
keypad->cols = cols;
|
||||
keypad->row_shift = get_count_order(cols);
|
||||
INIT_DELAYED_WORK(&keypad->work, ht16k33_keypad_scan);
|
||||
|
||||
err = input_register_device(keypad->dev);
|
||||
if (err)
|
||||
goto err_fbdev_unregister;
|
||||
|
||||
/* Backlight */
|
||||
memset(&bl_props, 0, sizeof(struct backlight_properties));
|
||||
bl_props.type = BACKLIGHT_RAW;
|
||||
bl_props.max_brightness = MAX_BRIGHTNESS;
|
||||
|
||||
bl = devm_backlight_device_register(&client->dev, DRIVER_NAME"-bl",
|
||||
&client->dev, priv,
|
||||
&ht16k33_bl_ops, &bl_props);
|
||||
if (IS_ERR(bl)) {
|
||||
dev_err(&client->dev, "failed to register backlight\n");
|
||||
err = PTR_ERR(bl);
|
||||
goto err_keypad_unregister;
|
||||
}
|
||||
|
||||
err = of_property_read_u32(node, "default-brightness-level",
|
||||
&dft_brightness);
|
||||
if (err) {
|
||||
dft_brightness = MAX_BRIGHTNESS;
|
||||
} else if (dft_brightness > MAX_BRIGHTNESS) {
|
||||
dev_warn(&client->dev,
|
||||
"invalid default brightness level: %u, using %u\n",
|
||||
dft_brightness, MAX_BRIGHTNESS);
|
||||
dft_brightness = MAX_BRIGHTNESS;
|
||||
}
|
||||
|
||||
bl->props.brightness = dft_brightness;
|
||||
ht16k33_bl_update_status(bl);
|
||||
|
||||
ht16k33_fb_queue(priv);
|
||||
return 0;
|
||||
|
||||
err_keypad_unregister:
|
||||
input_unregister_device(keypad->dev);
|
||||
err_fbdev_unregister:
|
||||
unregister_framebuffer(fbdev->info);
|
||||
err_fbdev_info:
|
||||
framebuffer_release(fbdev->info);
|
||||
err_fbdev_buffer:
|
||||
free_page((unsigned long) fbdev->buffer);
|
||||
err_free_fbdev:
|
||||
kfree(fbdev);
|
||||
err_destroy_wq:
|
||||
destroy_workqueue(priv->workqueue);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
||||
static int ht16k33_remove(struct i2c_client *client)
|
||||
{
|
||||
struct ht16k33_priv *priv = i2c_get_clientdata(client);
|
||||
struct ht16k33_keypad *keypad = &priv->keypad;
|
||||
struct ht16k33_fbdev *fbdev = &priv->fbdev;
|
||||
|
||||
ht16k33_keypad_stop(keypad->dev);
|
||||
|
||||
cancel_delayed_work(&fbdev->work);
|
||||
unregister_framebuffer(fbdev->info);
|
||||
framebuffer_release(fbdev->info);
|
||||
free_page((unsigned long) fbdev->buffer);
|
||||
|
||||
destroy_workqueue(priv->workqueue);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct i2c_device_id ht16k33_i2c_match[] = {
|
||||
{ "ht16k33", 0 },
|
||||
{ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(i2c, ht16k33_i2c_match);
|
||||
|
||||
static const struct of_device_id ht16k33_of_match[] = {
|
||||
{ .compatible = "holtek,ht16k33", },
|
||||
{ }
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, ht16k33_of_match);
|
||||
|
||||
static struct i2c_driver ht16k33_driver = {
|
||||
.probe = ht16k33_probe,
|
||||
.remove = ht16k33_remove,
|
||||
.driver = {
|
||||
.name = DRIVER_NAME,
|
||||
.of_match_table = of_match_ptr(ht16k33_of_match),
|
||||
},
|
||||
.id_table = ht16k33_i2c_match,
|
||||
};
|
||||
module_i2c_driver(ht16k33_driver);
|
||||
|
||||
MODULE_DESCRIPTION("Holtek HT16K33 driver");
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_AUTHOR("Robin van der Gracht <robin@protonic.nl>");
|
|
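The keypad path sizes its keymap with get_count_order(cols) so that a (row, column) pair folds into a single keymap index. A minimal sketch of how a scan routine is expected to look codes up under that scheme; everything except MATRIX_SCAN_CODE() and get_count_order() is illustrative, not taken from the driver:

#include <linux/input/matrix_keypad.h>
#include <linux/log2.h>

/*
 * Illustrative helper: map a pressed (row, col) to the keycode that
 * matrix_keypad_build_keymap() stored in the input device's keymap.
 */
static unsigned short example_lookup(const unsigned short *keymap,
				     unsigned int row, unsigned int col,
				     unsigned int cols)
{
	unsigned int row_shift = get_count_order(cols);

	/* MATRIX_SCAN_CODE(row, col, row_shift) == (row << row_shift) + col */
	return keymap[MATRIX_SCAN_CODE(row, col, row_shift)];
}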
@@ -17,7 +17,6 @@ config DEVMEM

 config DEVKMEM
 	bool "/dev/kmem virtual device support"
-	default y
 	help
 	  Say Y here if you want to support the /dev/kmem device. The
 	  /dev/kmem device is rarely used, but can be used for certain
@@ -579,7 +578,7 @@ config DEVPORT
 source "drivers/s390/char/Kconfig"

 config TILE_SROM
-	bool "Character-device access via hypervisor to the Tilera SPI ROM"
+	tristate "Character-device access via hypervisor to the Tilera SPI ROM"
 	depends on TILE
 	default y
 	---help---
@@ -43,6 +43,17 @@ config CARDMAN_4040
 	  (http://www.omnikey.com/), or a current development version of OpenCT
 	  (http://www.opensc-project.org/opensc).

+config SCR24X
+	tristate "SCR24x Chip Card Interface support"
+	depends on PCMCIA
+	help
+	  Enable support for the SCR24x PCMCIA Chip Card Interface.
+
+	  To compile this driver as a module, choose M here.
+	  The module will be called scr24x_cs.
+
+	  If unsure say N.
+
 config IPWIRELESS
 	tristate "IPWireless 3G UMTS PCMCIA card support"
 	depends on PCMCIA && NETDEVICES && TTY
@@ -7,3 +7,4 @@
 obj-$(CONFIG_SYNCLINK_CS) += synclink_cs.o
 obj-$(CONFIG_CARDMAN_4000) += cm4000_cs.o
 obj-$(CONFIG_CARDMAN_4040) += cm4040_cs.o
+obj-$(CONFIG_SCR24X) += scr24x_cs.o
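With CONFIG_SCR24X=m the object builds as scr24x_cs.ko; from the top of an already-configured kernel tree, something like the following rebuilds just this directory (standard kbuild usage, shown only as an example):

	make M=drivers/char/pcmcia modules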
@ -0,0 +1,373 @@
|
|||
/*
|
||||
* SCR24x PCMCIA Smart Card Reader Driver
|
||||
*
|
||||
* Copyright (C) 2005-2006 TL Sudheendran
|
||||
* Copyright (C) 2016 Lubomir Rintel
|
||||
*
|
||||
* Derived from "scr24x_v4.2.6_Release.tar.gz" driver by TL Sudheendran.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License as published by
|
||||
* the Free Software Foundation; either version 2, or (at your option)
|
||||
* any later version.
|
||||
*
|
||||
* This program is distributed in the hope that it will be useful,
|
||||
* but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License
|
||||
* along with this program; see the file COPYING. If not, write to
|
||||
* the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
|
||||
*/
|
||||
|
||||
#include <linux/device.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/cdev.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/fs.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/uaccess.h>
|
||||
|
||||
#include <pcmcia/cistpl.h>
|
||||
#include <pcmcia/ds.h>
|
||||
|
||||
#define CCID_HEADER_SIZE 10
|
||||
#define CCID_LENGTH_OFFSET 1
|
||||
#define CCID_MAX_LEN 271
|
||||
|
||||
#define SCR24X_DATA(n) (1 + n)
|
||||
#define SCR24X_CMD_STATUS 7
|
||||
#define CMD_START 0x40
|
||||
#define CMD_WRITE_BYTE 0x41
|
||||
#define CMD_READ_BYTE 0x42
|
||||
#define STATUS_BUSY 0x80
|
||||
|
||||
struct scr24x_dev {
|
||||
struct device *dev;
|
||||
struct cdev c_dev;
|
||||
unsigned char buf[CCID_MAX_LEN];
|
||||
int devno;
|
||||
struct mutex lock;
|
||||
struct kref refcnt;
|
||||
u8 __iomem *regs;
|
||||
};
|
||||
|
||||
#define SCR24X_DEVS 8
|
||||
static DECLARE_BITMAP(scr24x_minors, SCR24X_DEVS);
|
||||
|
||||
static struct class *scr24x_class;
|
||||
static dev_t scr24x_devt;
|
||||
|
||||
static void scr24x_delete(struct kref *kref)
|
||||
{
|
||||
struct scr24x_dev *dev = container_of(kref, struct scr24x_dev,
|
||||
refcnt);
|
||||
|
||||
kfree(dev);
|
||||
}
|
||||
|
||||
static int scr24x_wait_ready(struct scr24x_dev *dev)
|
||||
{
|
||||
u_char status;
|
||||
int timeout = 100;
|
||||
|
||||
do {
|
||||
status = ioread8(dev->regs + SCR24X_CMD_STATUS);
|
||||
if (!(status & STATUS_BUSY))
|
||||
return 0;
|
||||
|
||||
msleep(20);
|
||||
} while (--timeout);
|
||||
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
static int scr24x_open(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct scr24x_dev *dev = container_of(inode->i_cdev,
|
||||
struct scr24x_dev, c_dev);
|
||||
|
||||
kref_get(&dev->refcnt);
|
||||
filp->private_data = dev;
|
||||
|
||||
return nonseekable_open(inode, filp);
|
||||
}
|
||||
|
||||
static int scr24x_release(struct inode *inode, struct file *filp)
|
||||
{
|
||||
struct scr24x_dev *dev = filp->private_data;
|
||||
|
||||
/* We must not take the dev->lock here as scr24x_delete()
|
||||
* might be called to remove the dev structure altogether.
|
||||
* We don't need the lock anyway, since after the reference
|
||||
* acquired in probe() is released in remove() the chrdev
|
||||
* is already unregistered and noone can possibly acquire
|
||||
* a reference via open() anymore. */
|
||||
kref_put(&dev->refcnt, scr24x_delete);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int read_chunk(struct scr24x_dev *dev, size_t offset, size_t limit)
|
||||
{
|
||||
size_t i, y;
|
||||
int ret;
|
||||
|
||||
for (i = offset; i < limit; i += 5) {
|
||||
iowrite8(CMD_READ_BYTE, dev->regs + SCR24X_CMD_STATUS);
|
||||
ret = scr24x_wait_ready(dev);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
for (y = 0; y < 5 && i + y < limit; y++)
|
||||
dev->buf[i + y] = ioread8(dev->regs + SCR24X_DATA(y));
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t scr24x_read(struct file *filp, char __user *buf, size_t count,
|
||||
loff_t *ppos)
|
||||
{
|
||||
struct scr24x_dev *dev = filp->private_data;
|
||||
int ret;
|
||||
int len;
|
||||
|
||||
if (count < CCID_HEADER_SIZE)
|
||||
return -EINVAL;
|
||||
|
||||
if (mutex_lock_interruptible(&dev->lock))
|
||||
return -ERESTARTSYS;
|
||||
|
||||
if (!dev->dev) {
|
||||
ret = -ENODEV;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scr24x_wait_ready(dev);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
len = CCID_HEADER_SIZE;
|
||||
ret = read_chunk(dev, 0, len);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
len += le32_to_cpu(*(__le32 *)(&dev->buf[CCID_LENGTH_OFFSET]));
|
||||
if (len > sizeof(dev->buf)) {
|
||||
ret = -EIO;
|
||||
goto out;
|
||||
}
|
||||
ret = read_chunk(dev, CCID_HEADER_SIZE, len);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
if (len < count)
|
||||
count = len;
|
||||
|
||||
if (copy_to_user(buf, dev->buf, count)) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = count;
|
||||
out:
|
||||
mutex_unlock(&dev->lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static ssize_t scr24x_write(struct file *filp, const char __user *buf,
|
||||
size_t count, loff_t *ppos)
|
||||
{
|
||||
struct scr24x_dev *dev = filp->private_data;
|
||||
size_t i, y;
|
||||
int ret;
|
||||
|
||||
if (mutex_lock_interruptible(&dev->lock))
|
||||
return -ERESTARTSYS;
|
||||
|
||||
if (!dev->dev) {
|
||||
ret = -ENODEV;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (count > sizeof(dev->buf)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
if (copy_from_user(dev->buf, buf, count)) {
|
||||
ret = -EFAULT;
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = scr24x_wait_ready(dev);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
iowrite8(CMD_START, dev->regs + SCR24X_CMD_STATUS);
|
||||
ret = scr24x_wait_ready(dev);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
|
||||
for (i = 0; i < count; i += 5) {
|
||||
for (y = 0; y < 5 && i + y < count; y++)
|
||||
iowrite8(dev->buf[i + y], dev->regs + SCR24X_DATA(y));
|
||||
|
||||
iowrite8(CMD_WRITE_BYTE, dev->regs + SCR24X_CMD_STATUS);
|
||||
ret = scr24x_wait_ready(dev);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
}
|
||||
|
||||
ret = count;
|
||||
out:
|
||||
mutex_unlock(&dev->lock);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct file_operations scr24x_fops = {
|
||||
.owner = THIS_MODULE,
|
||||
.read = scr24x_read,
|
||||
.write = scr24x_write,
|
||||
.open = scr24x_open,
|
||||
.release = scr24x_release,
|
||||
.llseek = no_llseek,
|
||||
};
|
||||
|
||||
static int scr24x_config_check(struct pcmcia_device *link, void *priv_data)
|
||||
{
|
||||
if (resource_size(link->resource[PCMCIA_IOPORT_0]) != 0x11)
|
||||
return -ENODEV;
|
||||
return pcmcia_request_io(link);
|
||||
}
|
||||
|
||||
static int scr24x_probe(struct pcmcia_device *link)
|
||||
{
|
||||
struct scr24x_dev *dev;
|
||||
int ret;
|
||||
|
||||
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
|
||||
if (!dev)
|
||||
return -ENOMEM;
|
||||
|
||||
dev->devno = find_first_zero_bit(scr24x_minors, SCR24X_DEVS);
|
||||
if (dev->devno >= SCR24X_DEVS) {
|
||||
ret = -EBUSY;
|
||||
goto err;
|
||||
}
|
||||
|
||||
mutex_init(&dev->lock);
|
||||
kref_init(&dev->refcnt);
|
||||
|
||||
link->priv = dev;
|
||||
link->config_flags |= CONF_ENABLE_IRQ | CONF_AUTO_SET_IO;
|
||||
|
||||
ret = pcmcia_loop_config(link, scr24x_config_check, NULL);
|
||||
if (ret < 0)
|
||||
goto err;
|
||||
|
||||
dev->dev = &link->dev;
|
||||
dev->regs = devm_ioport_map(&link->dev,
|
||||
link->resource[PCMCIA_IOPORT_0]->start,
|
||||
resource_size(link->resource[PCMCIA_IOPORT_0]));
|
||||
if (!dev->regs) {
|
||||
ret = -EIO;
|
||||
goto err;
|
||||
}
|
||||
|
||||
cdev_init(&dev->c_dev, &scr24x_fops);
|
||||
dev->c_dev.owner = THIS_MODULE;
|
||||
dev->c_dev.ops = &scr24x_fops;
|
||||
ret = cdev_add(&dev->c_dev, MKDEV(MAJOR(scr24x_devt), dev->devno), 1);
|
||||
if (ret < 0)
|
||||
goto err;
|
||||
|
||||
ret = pcmcia_enable_device(link);
|
||||
if (ret < 0) {
|
||||
pcmcia_disable_device(link);
|
||||
goto err;
|
||||
}
|
||||
|
||||
device_create(scr24x_class, NULL, MKDEV(MAJOR(scr24x_devt), dev->devno),
|
||||
NULL, "scr24x%d", dev->devno);
|
||||
|
||||
dev_info(&link->dev, "SCR24x Chip Card Interface\n");
|
||||
return 0;
|
||||
|
||||
err:
|
||||
if (dev->devno < SCR24X_DEVS)
|
||||
clear_bit(dev->devno, scr24x_minors);
|
||||
kfree (dev);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void scr24x_remove(struct pcmcia_device *link)
|
||||
{
|
||||
struct scr24x_dev *dev = (struct scr24x_dev *)link->priv;
|
||||
|
||||
device_destroy(scr24x_class, MKDEV(MAJOR(scr24x_devt), dev->devno));
|
||||
mutex_lock(&dev->lock);
|
||||
pcmcia_disable_device(link);
|
||||
cdev_del(&dev->c_dev);
|
||||
clear_bit(dev->devno, scr24x_minors);
|
||||
dev->dev = NULL;
|
||||
mutex_unlock(&dev->lock);
|
||||
|
||||
kref_put(&dev->refcnt, scr24x_delete);
|
||||
}
|
||||
|
||||
static const struct pcmcia_device_id scr24x_ids[] = {
|
||||
PCMCIA_DEVICE_PROD_ID12("HP", "PC Card Smart Card Reader",
|
||||
0x53cb94f9, 0xbfdf89a5),
|
||||
PCMCIA_DEVICE_PROD_ID1("SCR241 PCMCIA", 0x6271efa3),
|
||||
PCMCIA_DEVICE_PROD_ID1("SCR243 PCMCIA", 0x2054e8de),
|
||||
PCMCIA_DEVICE_PROD_ID1("SCR24x PCMCIA", 0x54a33665),
|
||||
PCMCIA_DEVICE_NULL
|
||||
};
|
||||
MODULE_DEVICE_TABLE(pcmcia, scr24x_ids);
|
||||
|
||||
static struct pcmcia_driver scr24x_driver = {
|
||||
.owner = THIS_MODULE,
|
||||
.name = "scr24x_cs",
|
||||
.probe = scr24x_probe,
|
||||
.remove = scr24x_remove,
|
||||
.id_table = scr24x_ids,
|
||||
};
|
||||
|
||||
static int __init scr24x_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
scr24x_class = class_create(THIS_MODULE, "scr24x");
|
||||
if (IS_ERR(scr24x_class))
|
||||
return PTR_ERR(scr24x_class);
|
||||
|
||||
ret = alloc_chrdev_region(&scr24x_devt, 0, SCR24X_DEVS, "scr24x");
|
||||
if (ret < 0) {
|
||||
class_destroy(scr24x_class);
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = pcmcia_register_driver(&scr24x_driver);
|
||||
if (ret < 0) {
|
||||
unregister_chrdev_region(scr24x_devt, SCR24X_DEVS);
|
||||
class_destroy(scr24x_class);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit scr24x_exit(void)
|
||||
{
|
||||
pcmcia_unregister_driver(&scr24x_driver);
|
||||
unregister_chrdev_region(scr24x_devt, SCR24X_DEVS);
|
||||
class_destroy(scr24x_class);
|
||||
}
|
||||
|
||||
module_init(scr24x_init);
|
||||
module_exit(scr24x_exit);
|
||||
|
||||
MODULE_AUTHOR("Lubomir Rintel");
|
||||
MODULE_DESCRIPTION("SCR24x PCMCIA Smart Card Reader Driver");
|
||||
MODULE_LICENSE("GPL");
|
|
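From userspace the reader shows up as a plain character device, one node per minor, typically /dev/scr24x0 once udev picks up the "scr24x" class device (the exact path depends on local udev rules). A command is written as one CCID-framed block and the response, header included, is read back in one call. A minimal sketch, with the device path and command contents as placeholders:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Path is an assumption; the driver only guarantees a scr24xN node. */
	int fd = open("/dev/scr24x0", O_RDWR);
	unsigned char cmd[271] = { 0 };	/* CCID_MAX_LEN in the driver */
	unsigned char rsp[271];
	ssize_t n;

	if (fd < 0)
		return 1;
	/*
	 * cmd[] must hold a complete CCID command block (10-byte header plus
	 * payload); building one is out of scope for this sketch.
	 */
	if (write(fd, cmd, sizeof(cmd)) < 0)
		goto out;
	/*
	 * read() needs room for at least the 10-byte CCID header and returns
	 * the full response in one go.
	 */
	n = read(fd, rsp, sizeof(rsp));
	if (n > 0)
		printf("got %zd byte response\n", n);
out:
	close(fd);
	return 0;
}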
@ -86,6 +86,9 @@ struct pp_struct {
|
|||
long default_inactivity;
|
||||
};
|
||||
|
||||
/* should we use PARDEVICE_MAX here? */
|
||||
static struct device *devices[PARPORT_MAX];
|
||||
|
||||
/* pp_struct.flags bitfields */
|
||||
#define PP_CLAIMED (1<<0)
|
||||
#define PP_EXCL (1<<1)
|
||||
|
@ -294,7 +297,7 @@ static int register_device(int minor, struct pp_struct *pp)
|
|||
|
||||
port = parport_find_number(minor);
|
||||
if (!port) {
|
||||
printk(KERN_WARNING "%s: no associated port!\n", name);
|
||||
pr_warn("%s: no associated port!\n", name);
|
||||
kfree(name);
|
||||
return -ENXIO;
|
||||
}
|
||||
|
@ -305,10 +308,10 @@ static int register_device(int minor, struct pp_struct *pp)
|
|||
ppdev_cb.private = pp;
|
||||
pdev = parport_register_dev_model(port, name, &ppdev_cb, minor);
|
||||
parport_put_port(port);
|
||||
kfree(name);
|
||||
|
||||
if (!pdev) {
|
||||
printk(KERN_WARNING "%s: failed to register device!\n", name);
|
||||
kfree(name);
|
||||
pr_warn("%s: failed to register device!\n", name);
|
||||
return -ENXIO;
|
||||
}
|
||||
|
||||
|
@ -789,13 +792,29 @@ static const struct file_operations pp_fops = {
|
|||
|
||||
static void pp_attach(struct parport *port)
|
||||
{
|
||||
device_create(ppdev_class, port->dev, MKDEV(PP_MAJOR, port->number),
|
||||
NULL, "parport%d", port->number);
|
||||
struct device *ret;
|
||||
|
||||
if (devices[port->number])
|
||||
return;
|
||||
|
||||
ret = device_create(ppdev_class, port->dev,
|
||||
MKDEV(PP_MAJOR, port->number), NULL,
|
||||
"parport%d", port->number);
|
||||
if (IS_ERR(ret)) {
|
||||
pr_err("Failed to create device parport%d\n",
|
||||
port->number);
|
||||
return;
|
||||
}
|
||||
devices[port->number] = ret;
|
||||
}
|
||||
|
||||
static void pp_detach(struct parport *port)
|
||||
{
|
||||
if (!devices[port->number])
|
||||
return;
|
||||
|
||||
device_destroy(ppdev_class, MKDEV(PP_MAJOR, port->number));
|
||||
devices[port->number] = NULL;
|
||||
}
|
||||
|
||||
static int pp_probe(struct pardevice *par_dev)
|
||||
|
@ -822,8 +841,7 @@ static int __init ppdev_init(void)
|
|||
int err = 0;
|
||||
|
||||
if (register_chrdev(PP_MAJOR, CHRDEV, &pp_fops)) {
|
||||
printk(KERN_WARNING CHRDEV ": unable to get major %d\n",
|
||||
PP_MAJOR);
|
||||
pr_warn(CHRDEV ": unable to get major %d\n", PP_MAJOR);
|
||||
return -EIO;
|
||||
}
|
||||
ppdev_class = class_create(THIS_MODULE, CHRDEV);
|
||||
|
@ -833,11 +851,11 @@ static int __init ppdev_init(void)
|
|||
}
|
||||
err = parport_register_driver(&pp_driver);
|
||||
if (err < 0) {
|
||||
printk(KERN_WARNING CHRDEV ": unable to register with parport\n");
|
||||
pr_warn(CHRDEV ": unable to register with parport\n");
|
||||
goto out_class;
|
||||
}
|
||||
|
||||
printk(KERN_INFO PP_VERSION "\n");
|
||||
pr_info(PP_VERSION "\n");
|
||||
goto out;
|
||||
|
||||
out_class:
|
||||
|
|
|
@@ -285,7 +285,7 @@ scdrv_write(struct file *file, const char __user *buf,
 	DECLARE_WAITQUEUE(wait, current);

 	if (file->f_flags & O_NONBLOCK) {
-		spin_unlock(&sd->sd_wlock);
+		spin_unlock_irqrestore(&sd->sd_wlock, flags);
 		up(&sd->sd_wbs);
 		return -EAGAIN;
 	}
@@ -312,7 +312,8 @@ ATTRIBUTE_GROUPS(srom_dev);

 static char *srom_devnode(struct device *dev, umode_t *mode)
 {
-	*mode = S_IRUGO | S_IWUSR;
+	if (mode)
+		*mode = 0644;
 	return kasprintf(GFP_KERNEL, "srom/%s", dev_name(dev));
 }
@@ -13,12 +13,26 @@ config FPGA

 if FPGA

+config FPGA_REGION
+	tristate "FPGA Region"
+	depends on OF && FPGA_BRIDGE
+	help
+	  FPGA Regions allow loading FPGA images under control of
+	  the Device Tree.
+
 config FPGA_MGR_SOCFPGA
 	tristate "Altera SOCFPGA FPGA Manager"
-	depends on ARCH_SOCFPGA
+	depends on ARCH_SOCFPGA || COMPILE_TEST
 	help
 	  FPGA manager driver support for Altera SOCFPGA.

+config FPGA_MGR_SOCFPGA_A10
+	tristate "Altera SoCFPGA Arria10"
+	depends on ARCH_SOCFPGA || COMPILE_TEST
+	select REGMAP_MMIO
+	help
+	  FPGA manager driver support for Altera Arria10 SoCFPGA.
+
 config FPGA_MGR_ZYNQ_FPGA
 	tristate "Xilinx Zynq FPGA"
 	depends on ARCH_ZYNQ || COMPILE_TEST
@@ -26,6 +40,29 @@ config FPGA_MGR_ZYNQ_FPGA
 	help
 	  FPGA manager driver support for Xilinx Zynq FPGAs.

+config FPGA_BRIDGE
+	tristate "FPGA Bridge Framework"
+	depends on OF
+	help
+	  Say Y here if you want to support bridges connected between host
+	  processors and FPGAs or between FPGAs.
+
+config SOCFPGA_FPGA_BRIDGE
+	tristate "Altera SoCFPGA FPGA Bridges"
+	depends on ARCH_SOCFPGA && FPGA_BRIDGE
+	help
+	  Say Y to enable drivers for FPGA bridges for Altera SOCFPGA
+	  devices.
+
+config ALTERA_FREEZE_BRIDGE
+	tristate "Altera FPGA Freeze Bridge"
+	depends on ARCH_SOCFPGA && FPGA_BRIDGE
+	help
+	  Say Y to enable drivers for Altera FPGA Freeze bridges. A
+	  freeze bridge is a bridge that exists in the FPGA fabric to
+	  isolate one region of the FPGA from the busses while that
+	  region is being reprogrammed.
+
 endif # FPGA

 endmenu
@@ -7,4 +7,13 @@ obj-$(CONFIG_FPGA) += fpga-mgr.o

 # FPGA Manager Drivers
 obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o
+obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o
 obj-$(CONFIG_FPGA_MGR_ZYNQ_FPGA) += zynq-fpga.o
+
+# FPGA Bridge Drivers
+obj-$(CONFIG_FPGA_BRIDGE) += fpga-bridge.o
+obj-$(CONFIG_SOCFPGA_FPGA_BRIDGE) += altera-hps2fpga.o altera-fpga2sdram.o
+obj-$(CONFIG_ALTERA_FREEZE_BRIDGE) += altera-freeze-bridge.o
+
+# High Level Interfaces
+obj-$(CONFIG_FPGA_REGION) += fpga-region.o
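Taken together, a configuration that exercises the new code might look like the fragment below; this is an illustrative .config excerpt, and the SoCFPGA bridge options additionally require ARCH_SOCFPGA per the dependencies above:

CONFIG_FPGA=m
CONFIG_FPGA_REGION=m
CONFIG_FPGA_BRIDGE=m
CONFIG_SOCFPGA_FPGA_BRIDGE=m
CONFIG_ALTERA_FREEZE_BRIDGE=m
CONFIG_FPGA_MGR_SOCFPGA=m
CONFIG_FPGA_MGR_SOCFPGA_A10=m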
@ -0,0 +1,180 @@
|
|||
/*
|
||||
* FPGA to SDRAM Bridge Driver for Altera SoCFPGA Devices
|
||||
*
|
||||
* Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
/*
|
||||
* This driver manages a bridge between an FPGA and the SDRAM used by the ARM
|
||||
* host processor system (HPS).
|
||||
*
|
||||
* The bridge contains 4 read ports, 4 write ports, and 6 command ports.
|
||||
* Reconfiguring these ports requires that no SDRAM transactions occur during
|
||||
* reconfiguration. The code reconfiguring the ports cannot run out of SDRAM
|
||||
* nor can the FPGA access the SDRAM during reconfiguration. This driver does
|
||||
* not support reconfiguring the ports. The ports are configured by code
|
||||
* running out of on chip ram before Linux is started and the configuration
|
||||
* is passed in a handoff register in the system manager.
|
||||
*
|
||||
* This driver supports enabling and disabling of the configured ports, which
|
||||
* allows for safe reprogramming of the FPGA, assuming that the new FPGA image
|
||||
* uses the same port configuration. Bridges must be disabled before
|
||||
* reprogramming the FPGA and re-enabled after the FPGA has been programmed.
|
||||
*/
|
||||
|
||||
#include <linux/fpga/fpga-bridge.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/regmap.h>
|
||||
|
||||
#define ALT_SDR_CTL_FPGAPORTRST_OFST 0x80
|
||||
#define ALT_SDR_CTL_FPGAPORTRST_PORTRSTN_MSK 0x00003fff
|
||||
#define ALT_SDR_CTL_FPGAPORTRST_RD_SHIFT 0
|
||||
#define ALT_SDR_CTL_FPGAPORTRST_WR_SHIFT 4
|
||||
#define ALT_SDR_CTL_FPGAPORTRST_CTRL_SHIFT 8
|
||||
|
||||
/*
|
||||
* From the Cyclone V HPS Memory Map document:
|
||||
* These registers are used to store handoff information between the
|
||||
* preloader and the OS. These 8 registers can be used to store any
|
||||
* information. The contents of these registers have no impact on
|
||||
* the state of the HPS hardware.
|
||||
*/
|
||||
#define SYSMGR_ISWGRP_HANDOFF3 (0x8C)
|
||||
|
||||
#define F2S_BRIDGE_NAME "fpga2sdram"
|
||||
|
||||
struct alt_fpga2sdram_data {
|
||||
struct device *dev;
|
||||
struct regmap *sdrctl;
|
||||
int mask;
|
||||
};
|
||||
|
||||
static int alt_fpga2sdram_enable_show(struct fpga_bridge *bridge)
|
||||
{
|
||||
struct alt_fpga2sdram_data *priv = bridge->priv;
|
||||
int value;
|
||||
|
||||
regmap_read(priv->sdrctl, ALT_SDR_CTL_FPGAPORTRST_OFST, &value);
|
||||
|
||||
return (value & priv->mask) == priv->mask;
|
||||
}
|
||||
|
||||
static inline int _alt_fpga2sdram_enable_set(struct alt_fpga2sdram_data *priv,
|
||||
bool enable)
|
||||
{
|
||||
return regmap_update_bits(priv->sdrctl, ALT_SDR_CTL_FPGAPORTRST_OFST,
|
||||
priv->mask, enable ? priv->mask : 0);
|
||||
}
|
||||
|
||||
static int alt_fpga2sdram_enable_set(struct fpga_bridge *bridge, bool enable)
|
||||
{
|
||||
return _alt_fpga2sdram_enable_set(bridge->priv, enable);
|
||||
}
|
||||
|
||||
struct prop_map {
|
||||
char *prop_name;
|
||||
u32 *prop_value;
|
||||
u32 prop_max;
|
||||
};
|
||||
|
||||
static const struct fpga_bridge_ops altera_fpga2sdram_br_ops = {
|
||||
.enable_set = alt_fpga2sdram_enable_set,
|
||||
.enable_show = alt_fpga2sdram_enable_show,
|
||||
};
|
||||
|
||||
static const struct of_device_id altera_fpga_of_match[] = {
|
||||
{ .compatible = "altr,socfpga-fpga2sdram-bridge" },
|
||||
{},
|
||||
};
|
||||
|
||||
static int alt_fpga_bridge_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct alt_fpga2sdram_data *priv;
|
||||
u32 enable;
|
||||
struct regmap *sysmgr;
|
||||
int ret = 0;
|
||||
|
||||
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
priv->dev = dev;
|
||||
|
||||
priv->sdrctl = syscon_regmap_lookup_by_compatible("altr,sdr-ctl");
|
||||
if (IS_ERR(priv->sdrctl)) {
|
||||
dev_err(dev, "regmap for altr,sdr-ctl lookup failed.\n");
|
||||
return PTR_ERR(priv->sdrctl);
|
||||
}
|
||||
|
||||
sysmgr = syscon_regmap_lookup_by_compatible("altr,sys-mgr");
|
||||
if (IS_ERR(sysmgr)) {
|
||||
dev_err(dev, "regmap for altr,sys-mgr lookup failed.\n");
|
||||
return PTR_ERR(sysmgr);
|
||||
}
|
||||
|
||||
/* Get f2s bridge configuration saved in handoff register */
|
||||
regmap_read(sysmgr, SYSMGR_ISWGRP_HANDOFF3, &priv->mask);
|
||||
|
||||
ret = fpga_bridge_register(dev, F2S_BRIDGE_NAME,
|
||||
&altera_fpga2sdram_br_ops, priv);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
dev_info(dev, "driver initialized with handoff %08x\n", priv->mask);
|
||||
|
||||
if (!of_property_read_u32(dev->of_node, "bridge-enable", &enable)) {
|
||||
if (enable > 1) {
|
||||
dev_warn(dev, "invalid bridge-enable %u > 1\n", enable);
|
||||
} else {
|
||||
dev_info(dev, "%s bridge\n",
|
||||
(enable ? "enabling" : "disabling"));
|
||||
ret = _alt_fpga2sdram_enable_set(priv, enable);
|
||||
if (ret) {
|
||||
fpga_bridge_unregister(&pdev->dev);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int alt_fpga_bridge_remove(struct platform_device *pdev)
|
||||
{
|
||||
fpga_bridge_unregister(&pdev->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
MODULE_DEVICE_TABLE(of, altera_fpga_of_match);
|
||||
|
||||
static struct platform_driver altera_fpga_driver = {
|
||||
.probe = alt_fpga_bridge_probe,
|
||||
.remove = alt_fpga_bridge_remove,
|
||||
.driver = {
|
||||
.name = "altera_fpga2sdram_bridge",
|
||||
.of_match_table = of_match_ptr(altera_fpga_of_match),
|
||||
},
|
||||
};
|
||||
|
||||
module_platform_driver(altera_fpga_driver);
|
||||
|
||||
MODULE_DESCRIPTION("Altera SoCFPGA FPGA to SDRAM Bridge");
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -0,0 +1,273 @@
|
|||
/*
|
||||
* FPGA Freeze Bridge Controller
|
||||
*
|
||||
* Copyright (C) 2016 Altera Corporation. All rights reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#include <linux/delay.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/fpga/fpga-bridge.h>
|
||||
|
||||
#define FREEZE_CSR_STATUS_OFFSET 0
|
||||
#define FREEZE_CSR_CTRL_OFFSET 4
|
||||
#define FREEZE_CSR_ILLEGAL_REQ_OFFSET 8
|
||||
#define FREEZE_CSR_REG_VERSION 12
|
||||
|
||||
#define FREEZE_CSR_SUPPORTED_VERSION 2
|
||||
|
||||
#define FREEZE_CSR_STATUS_FREEZE_REQ_DONE BIT(0)
|
||||
#define FREEZE_CSR_STATUS_UNFREEZE_REQ_DONE BIT(1)
|
||||
|
||||
#define FREEZE_CSR_CTRL_FREEZE_REQ BIT(0)
|
||||
#define FREEZE_CSR_CTRL_RESET_REQ BIT(1)
|
||||
#define FREEZE_CSR_CTRL_UNFREEZE_REQ BIT(2)
|
||||
|
||||
#define FREEZE_BRIDGE_NAME "freeze"
|
||||
|
||||
struct altera_freeze_br_data {
|
||||
struct device *dev;
|
||||
void __iomem *base_addr;
|
||||
bool enable;
|
||||
};
|
||||
|
||||
/*
|
||||
* Poll status until status bit is set or we have a timeout.
|
||||
*/
|
||||
static int altera_freeze_br_req_ack(struct altera_freeze_br_data *priv,
|
||||
u32 timeout, u32 req_ack)
|
||||
{
|
||||
struct device *dev = priv->dev;
|
||||
void __iomem *csr_illegal_req_addr = priv->base_addr +
|
||||
FREEZE_CSR_ILLEGAL_REQ_OFFSET;
|
||||
u32 status, illegal, ctrl;
|
||||
int ret = -ETIMEDOUT;
|
||||
|
||||
do {
|
||||
illegal = readl(csr_illegal_req_addr);
|
||||
if (illegal) {
|
||||
dev_err(dev, "illegal request detected 0x%x", illegal);
|
||||
|
||||
writel(1, csr_illegal_req_addr);
|
||||
|
||||
illegal = readl(csr_illegal_req_addr);
|
||||
if (illegal)
|
||||
dev_err(dev, "illegal request not cleared 0x%x",
|
||||
illegal);
|
||||
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
|
||||
status = readl(priv->base_addr + FREEZE_CSR_STATUS_OFFSET);
|
||||
dev_dbg(dev, "%s %x %x\n", __func__, status, req_ack);
|
||||
status &= req_ack;
|
||||
if (status) {
|
||||
ctrl = readl(priv->base_addr + FREEZE_CSR_CTRL_OFFSET);
|
||||
dev_dbg(dev, "%s request %x acknowledged %x %x\n",
|
||||
__func__, req_ack, status, ctrl);
|
||||
ret = 0;
|
||||
break;
|
||||
}
|
||||
|
||||
udelay(1);
|
||||
} while (timeout--);
|
||||
|
||||
if (ret == -ETIMEDOUT)
|
||||
dev_err(dev, "%s timeout waiting for 0x%x\n",
|
||||
__func__, req_ack);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int altera_freeze_br_do_freeze(struct altera_freeze_br_data *priv,
|
||||
u32 timeout)
|
||||
{
|
||||
struct device *dev = priv->dev;
|
||||
void __iomem *csr_ctrl_addr = priv->base_addr +
|
||||
FREEZE_CSR_CTRL_OFFSET;
|
||||
u32 status;
|
||||
int ret;
|
||||
|
||||
status = readl(priv->base_addr + FREEZE_CSR_STATUS_OFFSET);
|
||||
|
||||
dev_dbg(dev, "%s %d %d\n", __func__, status, readl(csr_ctrl_addr));
|
||||
|
||||
if (status & FREEZE_CSR_STATUS_FREEZE_REQ_DONE) {
|
||||
dev_dbg(dev, "%s bridge already disabled %d\n",
|
||||
__func__, status);
|
||||
return 0;
|
||||
} else if (!(status & FREEZE_CSR_STATUS_UNFREEZE_REQ_DONE)) {
|
||||
dev_err(dev, "%s bridge not enabled %d\n", __func__, status);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
writel(FREEZE_CSR_CTRL_FREEZE_REQ, csr_ctrl_addr);
|
||||
|
||||
ret = altera_freeze_br_req_ack(priv, timeout,
|
||||
FREEZE_CSR_STATUS_FREEZE_REQ_DONE);
|
||||
|
||||
if (ret)
|
||||
writel(0, csr_ctrl_addr);
|
||||
else
|
||||
writel(FREEZE_CSR_CTRL_RESET_REQ, csr_ctrl_addr);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int altera_freeze_br_do_unfreeze(struct altera_freeze_br_data *priv,
|
||||
u32 timeout)
|
||||
{
|
||||
struct device *dev = priv->dev;
|
||||
void __iomem *csr_ctrl_addr = priv->base_addr +
|
||||
FREEZE_CSR_CTRL_OFFSET;
|
||||
u32 status;
|
||||
int ret;
|
||||
|
||||
writel(0, csr_ctrl_addr);
|
||||
|
||||
status = readl(priv->base_addr + FREEZE_CSR_STATUS_OFFSET);
|
||||
|
||||
dev_dbg(dev, "%s %d %d\n", __func__, status, readl(csr_ctrl_addr));
|
||||
|
||||
if (status & FREEZE_CSR_STATUS_UNFREEZE_REQ_DONE) {
|
||||
dev_dbg(dev, "%s bridge already enabled %d\n",
|
||||
__func__, status);
|
||||
return 0;
|
||||
} else if (!(status & FREEZE_CSR_STATUS_FREEZE_REQ_DONE)) {
|
||||
dev_err(dev, "%s bridge not frozen %d\n", __func__, status);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
writel(FREEZE_CSR_CTRL_UNFREEZE_REQ, csr_ctrl_addr);
|
||||
|
||||
ret = altera_freeze_br_req_ack(priv, timeout,
|
||||
FREEZE_CSR_STATUS_UNFREEZE_REQ_DONE);
|
||||
|
||||
status = readl(priv->base_addr + FREEZE_CSR_STATUS_OFFSET);
|
||||
|
||||
dev_dbg(dev, "%s %d %d\n", __func__, status, readl(csr_ctrl_addr));
|
||||
|
||||
writel(0, csr_ctrl_addr);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/*
|
||||
* enable = 1 : allow traffic through the bridge
|
||||
* enable = 0 : disable traffic through the bridge
|
||||
*/
|
||||
static int altera_freeze_br_enable_set(struct fpga_bridge *bridge,
|
||||
bool enable)
|
||||
{
|
||||
struct altera_freeze_br_data *priv = bridge->priv;
|
||||
struct fpga_image_info *info = bridge->info;
|
||||
u32 timeout = 0;
|
||||
int ret;
|
||||
|
||||
if (enable) {
|
||||
if (info)
|
||||
timeout = info->enable_timeout_us;
|
||||
|
||||
ret = altera_freeze_br_do_unfreeze(bridge->priv, timeout);
|
||||
} else {
|
||||
if (info)
|
||||
timeout = info->disable_timeout_us;
|
||||
|
||||
ret = altera_freeze_br_do_freeze(bridge->priv, timeout);
|
||||
}
|
||||
|
||||
if (!ret)
|
||||
priv->enable = enable;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int altera_freeze_br_enable_show(struct fpga_bridge *bridge)
|
||||
{
|
||||
struct altera_freeze_br_data *priv = bridge->priv;
|
||||
|
||||
return priv->enable;
|
||||
}
|
||||
|
||||
static struct fpga_bridge_ops altera_freeze_br_br_ops = {
|
||||
.enable_set = altera_freeze_br_enable_set,
|
||||
.enable_show = altera_freeze_br_enable_show,
|
||||
};
|
||||
|
||||
static const struct of_device_id altera_freeze_br_of_match[] = {
|
||||
{ .compatible = "altr,freeze-bridge-controller", },
|
||||
{},
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, altera_freeze_br_of_match);
|
||||
|
||||
static int altera_freeze_br_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct device_node *np = pdev->dev.of_node;
|
||||
struct altera_freeze_br_data *priv;
|
||||
struct resource *res;
|
||||
u32 status, revision;
|
||||
|
||||
if (!np)
|
||||
return -ENODEV;
|
||||
|
||||
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
priv->dev = dev;
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
priv->base_addr = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(priv->base_addr))
|
||||
return PTR_ERR(priv->base_addr);
|
||||
|
||||
status = readl(priv->base_addr + FREEZE_CSR_STATUS_OFFSET);
|
||||
if (status & FREEZE_CSR_STATUS_UNFREEZE_REQ_DONE)
|
||||
priv->enable = 1;
|
||||
|
||||
revision = readl(priv->base_addr + FREEZE_CSR_REG_VERSION);
|
||||
if (revision != FREEZE_CSR_SUPPORTED_VERSION)
|
||||
dev_warn(dev,
|
||||
"%s Freeze Controller unexpected revision %d != %d\n",
|
||||
__func__, revision, FREEZE_CSR_SUPPORTED_VERSION);
|
||||
|
||||
return fpga_bridge_register(dev, FREEZE_BRIDGE_NAME,
|
||||
&altera_freeze_br_br_ops, priv);
|
||||
}
|
||||
|
||||
static int altera_freeze_br_remove(struct platform_device *pdev)
|
||||
{
|
||||
fpga_bridge_unregister(&pdev->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct platform_driver altera_freeze_br_driver = {
|
||||
.probe = altera_freeze_br_probe,
|
||||
.remove = altera_freeze_br_remove,
|
||||
.driver = {
|
||||
.name = "altera_freeze_br",
|
||||
.of_match_table = of_match_ptr(altera_freeze_br_of_match),
|
||||
},
|
||||
};
|
||||
|
||||
module_platform_driver(altera_freeze_br_driver);
|
||||
|
||||
MODULE_DESCRIPTION("Altera Freeze Bridge");
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -0,0 +1,222 @@
|
|||
/*
|
||||
* FPGA to/from HPS Bridge Driver for Altera SoCFPGA Devices
|
||||
*
|
||||
* Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved.
|
||||
*
|
||||
* Includes this patch from the mailing list:
|
||||
* fpga: altera-hps2fpga: fix HPS2FPGA bridge visibility to L3 masters
|
||||
* Signed-off-by: Anatolij Gustschin <agust@denx.de>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
/*
|
||||
* This driver manages bridges on a Altera SOCFPGA between the ARM host
|
||||
* processor system (HPS) and the embedded FPGA.
|
||||
*
|
||||
* This driver supports enabling and disabling of the configured ports, which
|
||||
* allows for safe reprogramming of the FPGA, assuming that the new FPGA image
|
||||
* uses the same port configuration. Bridges must be disabled before
|
||||
* reprogramming the FPGA and re-enabled after the FPGA has been programmed.
|
||||
*/
|
||||
|
||||
#include <linux/clk.h>
|
||||
#include <linux/fpga/fpga-bridge.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/mfd/syscon.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/regmap.h>
|
||||
#include <linux/reset.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
#define ALT_L3_REMAP_OFST 0x0
|
||||
#define ALT_L3_REMAP_MPUZERO_MSK 0x00000001
|
||||
#define ALT_L3_REMAP_H2F_MSK 0x00000008
|
||||
#define ALT_L3_REMAP_LWH2F_MSK 0x00000010
|
||||
|
||||
#define HPS2FPGA_BRIDGE_NAME "hps2fpga"
|
||||
#define LWHPS2FPGA_BRIDGE_NAME "lwhps2fpga"
|
||||
#define FPGA2HPS_BRIDGE_NAME "fpga2hps"
|
||||
|
||||
struct altera_hps2fpga_data {
|
||||
const char *name;
|
||||
struct reset_control *bridge_reset;
|
||||
struct regmap *l3reg;
|
||||
unsigned int remap_mask;
|
||||
struct clk *clk;
|
||||
};
|
||||
|
||||
static int alt_hps2fpga_enable_show(struct fpga_bridge *bridge)
|
||||
{
|
||||
struct altera_hps2fpga_data *priv = bridge->priv;
|
||||
|
||||
return reset_control_status(priv->bridge_reset);
|
||||
}
|
||||
|
||||
/* The L3 REMAP register is write only, so keep a cached value. */
|
||||
static unsigned int l3_remap_shadow;
|
||||
static spinlock_t l3_remap_lock;
|
||||
|
||||
static int _alt_hps2fpga_enable_set(struct altera_hps2fpga_data *priv,
|
||||
bool enable)
|
||||
{
|
||||
unsigned long flags;
|
||||
int ret;
|
||||
|
||||
/* bring bridge out of reset */
|
||||
if (enable)
|
||||
ret = reset_control_deassert(priv->bridge_reset);
|
||||
else
|
||||
ret = reset_control_assert(priv->bridge_reset);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Allow bridge to be visible to L3 masters or not */
|
||||
if (priv->remap_mask) {
|
||||
spin_lock_irqsave(&l3_remap_lock, flags);
|
||||
l3_remap_shadow |= ALT_L3_REMAP_MPUZERO_MSK;
|
||||
|
||||
if (enable)
|
||||
l3_remap_shadow |= priv->remap_mask;
|
||||
else
|
||||
l3_remap_shadow &= ~priv->remap_mask;
|
||||
|
||||
ret = regmap_write(priv->l3reg, ALT_L3_REMAP_OFST,
|
||||
l3_remap_shadow);
|
||||
spin_unlock_irqrestore(&l3_remap_lock, flags);
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int alt_hps2fpga_enable_set(struct fpga_bridge *bridge, bool enable)
|
||||
{
|
||||
return _alt_hps2fpga_enable_set(bridge->priv, enable);
|
||||
}
|
||||
|
||||
static const struct fpga_bridge_ops altera_hps2fpga_br_ops = {
|
||||
.enable_set = alt_hps2fpga_enable_set,
|
||||
.enable_show = alt_hps2fpga_enable_show,
|
||||
};
|
||||
|
||||
static struct altera_hps2fpga_data hps2fpga_data = {
|
||||
.name = HPS2FPGA_BRIDGE_NAME,
|
||||
.remap_mask = ALT_L3_REMAP_H2F_MSK,
|
||||
};
|
||||
|
||||
static struct altera_hps2fpga_data lwhps2fpga_data = {
|
||||
.name = LWHPS2FPGA_BRIDGE_NAME,
|
||||
.remap_mask = ALT_L3_REMAP_LWH2F_MSK,
|
||||
};
|
||||
|
||||
static struct altera_hps2fpga_data fpga2hps_data = {
|
||||
.name = FPGA2HPS_BRIDGE_NAME,
|
||||
};
|
||||
|
||||
static const struct of_device_id altera_fpga_of_match[] = {
|
||||
{ .compatible = "altr,socfpga-hps2fpga-bridge",
|
||||
.data = &hps2fpga_data },
|
||||
{ .compatible = "altr,socfpga-lwhps2fpga-bridge",
|
||||
.data = &lwhps2fpga_data },
|
||||
{ .compatible = "altr,socfpga-fpga2hps-bridge",
|
||||
.data = &fpga2hps_data },
|
||||
{},
|
||||
};
|
||||
|
||||
static int alt_fpga_bridge_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct altera_hps2fpga_data *priv;
|
||||
const struct of_device_id *of_id;
|
||||
u32 enable;
|
||||
int ret;
|
||||
|
||||
of_id = of_match_device(altera_fpga_of_match, dev);
|
||||
priv = (struct altera_hps2fpga_data *)of_id->data;
|
||||
|
||||
priv->bridge_reset = of_reset_control_get_by_index(dev->of_node, 0);
|
||||
if (IS_ERR(priv->bridge_reset)) {
|
||||
dev_err(dev, "Could not get %s reset control\n", priv->name);
|
||||
return PTR_ERR(priv->bridge_reset);
|
||||
}
|
||||
|
||||
if (priv->remap_mask) {
|
||||
priv->l3reg = syscon_regmap_lookup_by_compatible("altr,l3regs");
|
||||
if (IS_ERR(priv->l3reg)) {
|
||||
dev_err(dev, "regmap for altr,l3regs lookup failed\n");
|
||||
return PTR_ERR(priv->l3reg);
|
||||
}
|
||||
}
|
||||
|
||||
priv->clk = devm_clk_get(dev, NULL);
|
||||
if (IS_ERR(priv->clk)) {
|
||||
dev_err(dev, "no clock specified\n");
|
||||
return PTR_ERR(priv->clk);
|
||||
}
|
||||
|
||||
ret = clk_prepare_enable(priv->clk);
|
||||
if (ret) {
|
||||
dev_err(dev, "could not enable clock\n");
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
spin_lock_init(&l3_remap_lock);
|
||||
|
||||
if (!of_property_read_u32(dev->of_node, "bridge-enable", &enable)) {
|
||||
if (enable > 1) {
|
||||
dev_warn(dev, "invalid bridge-enable %u > 1\n", enable);
|
||||
} else {
|
||||
dev_info(dev, "%s bridge\n",
|
||||
(enable ? "enabling" : "disabling"));
|
||||
|
||||
ret = _alt_hps2fpga_enable_set(priv, enable);
|
||||
if (ret) {
|
||||
fpga_bridge_unregister(&pdev->dev);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return fpga_bridge_register(dev, priv->name, &altera_hps2fpga_br_ops,
|
||||
priv);
|
||||
}
|
||||
|
||||
static int alt_fpga_bridge_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct fpga_bridge *bridge = platform_get_drvdata(pdev);
|
||||
struct altera_hps2fpga_data *priv = bridge->priv;
|
||||
|
||||
fpga_bridge_unregister(&pdev->dev);
|
||||
|
||||
clk_disable_unprepare(priv->clk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
MODULE_DEVICE_TABLE(of, altera_fpga_of_match);
|
||||
|
||||
static struct platform_driver alt_fpga_bridge_driver = {
|
||||
.probe = alt_fpga_bridge_probe,
|
||||
.remove = alt_fpga_bridge_remove,
|
||||
.driver = {
|
||||
.name = "altera_hps2fpga_bridge",
|
||||
.of_match_table = of_match_ptr(altera_fpga_of_match),
|
||||
},
|
||||
};
|
||||
|
||||
module_platform_driver(alt_fpga_bridge_driver);
|
||||
|
||||
MODULE_DESCRIPTION("Altera SoCFPGA HPS to FPGA Bridge");
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -0,0 +1,395 @@
|
|||
/*
|
||||
* FPGA Bridge Framework Driver
|
||||
*
|
||||
* Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved.
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
#include <linux/fpga/fpga-bridge.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
static DEFINE_IDA(fpga_bridge_ida);
|
||||
static struct class *fpga_bridge_class;
|
||||
|
||||
/* Lock for adding/removing bridges to linked lists*/
|
||||
spinlock_t bridge_list_lock;
|
||||
|
||||
static int fpga_bridge_of_node_match(struct device *dev, const void *data)
|
||||
{
|
||||
return dev->of_node == data;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_bridge_enable - Enable transactions on the bridge
|
||||
*
|
||||
* @bridge: FPGA bridge
|
||||
*
|
||||
* Return: 0 for success, error code otherwise.
|
||||
*/
|
||||
int fpga_bridge_enable(struct fpga_bridge *bridge)
|
||||
{
|
||||
dev_dbg(&bridge->dev, "enable\n");
|
||||
|
||||
if (bridge->br_ops && bridge->br_ops->enable_set)
|
||||
return bridge->br_ops->enable_set(bridge, 1);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_enable);
|
||||
|
||||
/**
|
||||
* fpga_bridge_disable - Disable transactions on the bridge
|
||||
*
|
||||
* @bridge: FPGA bridge
|
||||
*
|
||||
* Return: 0 for success, error code otherwise.
|
||||
*/
|
||||
int fpga_bridge_disable(struct fpga_bridge *bridge)
|
||||
{
|
||||
dev_dbg(&bridge->dev, "disable\n");
|
||||
|
||||
if (bridge->br_ops && bridge->br_ops->enable_set)
|
||||
return bridge->br_ops->enable_set(bridge, 0);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_disable);
|
||||
|
||||
/**
|
||||
* of_fpga_bridge_get - get an exclusive reference to a fpga bridge
|
||||
*
|
||||
* @np: node pointer of a FPGA bridge
|
||||
* @info: fpga image specific information
|
||||
*
|
||||
* Return fpga_bridge struct if successful.
|
||||
* Return -EBUSY if someone already has a reference to the bridge.
|
||||
* Return -ENODEV if @np is not a FPGA Bridge.
|
||||
*/
|
||||
struct fpga_bridge *of_fpga_bridge_get(struct device_node *np,
|
||||
struct fpga_image_info *info)
|
||||
|
||||
{
|
||||
struct device *dev;
|
||||
struct fpga_bridge *bridge;
|
||||
int ret = -ENODEV;
|
||||
|
||||
dev = class_find_device(fpga_bridge_class, NULL, np,
|
||||
fpga_bridge_of_node_match);
|
||||
if (!dev)
|
||||
goto err_dev;
|
||||
|
||||
bridge = to_fpga_bridge(dev);
|
||||
if (!bridge)
|
||||
goto err_dev;
|
||||
|
||||
bridge->info = info;
|
||||
|
||||
if (!mutex_trylock(&bridge->mutex)) {
|
||||
ret = -EBUSY;
|
||||
goto err_dev;
|
||||
}
|
||||
|
||||
if (!try_module_get(dev->parent->driver->owner))
|
||||
goto err_ll_mod;
|
||||
|
||||
dev_dbg(&bridge->dev, "get\n");
|
||||
|
||||
return bridge;
|
||||
|
||||
err_ll_mod:
|
||||
mutex_unlock(&bridge->mutex);
|
||||
err_dev:
|
||||
put_device(dev);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_fpga_bridge_get);
|
||||
|
||||
/**
|
||||
* fpga_bridge_put - release a reference to a bridge
|
||||
*
|
||||
* @bridge: FPGA bridge
|
||||
*/
|
||||
void fpga_bridge_put(struct fpga_bridge *bridge)
|
||||
{
|
||||
dev_dbg(&bridge->dev, "put\n");
|
||||
|
||||
bridge->info = NULL;
|
||||
module_put(bridge->dev.parent->driver->owner);
|
||||
mutex_unlock(&bridge->mutex);
|
||||
put_device(&bridge->dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_put);
|
||||
|
||||
/**
|
||||
* fpga_bridges_enable - enable bridges in a list
|
||||
* @bridge_list: list of FPGA bridges
|
||||
*
|
||||
* Enable each bridge in the list. If list is empty, do nothing.
|
||||
*
|
||||
* Return 0 for success or empty bridge list; return error code otherwise.
|
||||
*/
|
||||
int fpga_bridges_enable(struct list_head *bridge_list)
|
||||
{
|
||||
struct fpga_bridge *bridge;
|
||||
struct list_head *node;
|
||||
int ret;
|
||||
|
||||
list_for_each(node, bridge_list) {
|
||||
bridge = list_entry(node, struct fpga_bridge, node);
|
||||
ret = fpga_bridge_enable(bridge);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridges_enable);
|
||||
|
||||
/**
|
||||
* fpga_bridges_disable - disable bridges in a list
|
||||
*
|
||||
* @bridge_list: list of FPGA bridges
|
||||
*
|
||||
* Disable each bridge in the list. If list is empty, do nothing.
|
||||
*
|
||||
* Return 0 for success or empty bridge list; return error code otherwise.
|
||||
*/
|
||||
int fpga_bridges_disable(struct list_head *bridge_list)
|
||||
{
|
||||
struct fpga_bridge *bridge;
|
||||
struct list_head *node;
|
||||
int ret;
|
||||
|
||||
list_for_each(node, bridge_list) {
|
||||
bridge = list_entry(node, struct fpga_bridge, node);
|
||||
ret = fpga_bridge_disable(bridge);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridges_disable);
|
||||
|
||||
/**
|
||||
* fpga_bridges_put - put bridges
|
||||
*
|
||||
* @bridge_list: list of FPGA bridges
|
||||
*
|
||||
* For each bridge in the list, put the bridge and remove it from the list.
|
||||
* If list is empty, do nothing.
|
||||
*/
|
||||
void fpga_bridges_put(struct list_head *bridge_list)
|
||||
{
|
||||
struct fpga_bridge *bridge;
|
||||
struct list_head *node, *next;
|
||||
unsigned long flags;
|
||||
|
||||
list_for_each_safe(node, next, bridge_list) {
|
||||
bridge = list_entry(node, struct fpga_bridge, node);
|
||||
|
||||
fpga_bridge_put(bridge);
|
||||
|
||||
spin_lock_irqsave(&bridge_list_lock, flags);
|
||||
list_del(&bridge->node);
|
||||
spin_unlock_irqrestore(&bridge_list_lock, flags);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridges_put);
|
||||
|
||||
/**
|
||||
* fpga_bridges_get_to_list - get a bridge, add it to a list
|
||||
*
|
||||
* @np: node pointer of a FPGA bridge
|
||||
* @info: fpga image specific information
|
||||
* @bridge_list: list of FPGA bridges
|
||||
*
|
||||
* Get an exclusive reference to the bridge and and it to the list.
|
||||
*
|
||||
* Return 0 for success, error code from of_fpga_bridge_get() othewise.
|
||||
*/
|
||||
int fpga_bridge_get_to_list(struct device_node *np,
|
||||
struct fpga_image_info *info,
|
||||
struct list_head *bridge_list)
|
||||
{
|
||||
struct fpga_bridge *bridge;
|
||||
unsigned long flags;
|
||||
|
||||
bridge = of_fpga_bridge_get(np, info);
|
||||
if (IS_ERR(bridge))
|
||||
return PTR_ERR(bridge);
|
||||
|
||||
spin_lock_irqsave(&bridge_list_lock, flags);
|
||||
list_add(&bridge->node, bridge_list);
|
||||
spin_unlock_irqrestore(&bridge_list_lock, flags);
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_get_to_list);
|
||||
|
||||
static ssize_t name_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct fpga_bridge *bridge = to_fpga_bridge(dev);
|
||||
|
||||
return sprintf(buf, "%s\n", bridge->name);
|
||||
}
|
||||
|
||||
static ssize_t state_show(struct device *dev,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct fpga_bridge *bridge = to_fpga_bridge(dev);
|
||||
int enable = 1;
|
||||
|
||||
if (bridge->br_ops && bridge->br_ops->enable_show)
|
||||
enable = bridge->br_ops->enable_show(bridge);
|
||||
|
||||
return sprintf(buf, "%s\n", enable ? "enabled" : "disabled");
|
||||
}
|
||||
|
||||
static DEVICE_ATTR_RO(name);
|
||||
static DEVICE_ATTR_RO(state);
|
||||
|
||||
static struct attribute *fpga_bridge_attrs[] = {
|
||||
&dev_attr_name.attr,
|
||||
&dev_attr_state.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(fpga_bridge);
|
||||
|
||||
/**
|
||||
* fpga_bridge_register - register a fpga bridge driver
|
||||
* @dev: FPGA bridge device from pdev
|
||||
* @name: FPGA bridge name
|
||||
* @br_ops: pointer to structure of fpga bridge ops
|
||||
* @priv: FPGA bridge private data
|
||||
*
|
||||
* Return: 0 for success, error code otherwise.
|
||||
*/
|
||||
int fpga_bridge_register(struct device *dev, const char *name,
|
||||
const struct fpga_bridge_ops *br_ops, void *priv)
|
||||
{
|
||||
struct fpga_bridge *bridge;
|
||||
int id, ret = 0;
|
||||
|
||||
if (!name || !strlen(name)) {
|
||||
dev_err(dev, "Attempt to register with no name!\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
|
||||
if (!bridge)
|
||||
return -ENOMEM;
|
||||
|
||||
id = ida_simple_get(&fpga_bridge_ida, 0, 0, GFP_KERNEL);
|
||||
if (id < 0) {
|
||||
ret = id;
|
||||
goto error_kfree;
|
||||
}
|
||||
|
||||
mutex_init(&bridge->mutex);
|
||||
INIT_LIST_HEAD(&bridge->node);
|
||||
|
||||
bridge->name = name;
|
||||
bridge->br_ops = br_ops;
|
||||
bridge->priv = priv;
|
||||
|
||||
device_initialize(&bridge->dev);
|
||||
bridge->dev.class = fpga_bridge_class;
|
||||
bridge->dev.parent = dev;
|
||||
bridge->dev.of_node = dev->of_node;
|
||||
bridge->dev.id = id;
|
||||
dev_set_drvdata(dev, bridge);
|
||||
|
||||
ret = dev_set_name(&bridge->dev, "br%d", id);
|
||||
if (ret)
|
||||
goto error_device;
|
||||
|
||||
ret = device_add(&bridge->dev);
|
||||
if (ret)
|
||||
goto error_device;
|
||||
|
||||
of_platform_populate(dev->of_node, NULL, NULL, dev);
|
||||
|
||||
dev_info(bridge->dev.parent, "fpga bridge [%s] registered\n",
|
||||
bridge->name);
|
||||
|
||||
return 0;
|
||||
|
||||
error_device:
|
||||
ida_simple_remove(&fpga_bridge_ida, id);
|
||||
error_kfree:
|
||||
kfree(bridge);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_register);
|
||||
|
||||
/**
|
||||
* fpga_bridge_unregister - unregister a fpga bridge driver
|
||||
* @dev: FPGA bridge device from pdev
|
||||
*/
|
||||
void fpga_bridge_unregister(struct device *dev)
|
||||
{
|
||||
struct fpga_bridge *bridge = dev_get_drvdata(dev);
|
||||
|
||||
/*
|
||||
* If the low level driver provides a method for putting bridge into
|
||||
* a desired state upon unregister, do it.
|
||||
*/
|
||||
if (bridge->br_ops && bridge->br_ops->fpga_bridge_remove)
|
||||
bridge->br_ops->fpga_bridge_remove(bridge);
|
||||
|
||||
device_unregister(&bridge->dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_bridge_unregister);
|
||||
|
||||
static void fpga_bridge_dev_release(struct device *dev)
|
||||
{
|
||||
struct fpga_bridge *bridge = to_fpga_bridge(dev);
|
||||
|
||||
ida_simple_remove(&fpga_bridge_ida, bridge->dev.id);
|
||||
kfree(bridge);
|
||||
}
|
||||
|
||||
static int __init fpga_bridge_dev_init(void)
|
||||
{
|
||||
spin_lock_init(&bridge_list_lock);
|
||||
|
||||
fpga_bridge_class = class_create(THIS_MODULE, "fpga_bridge");
|
||||
if (IS_ERR(fpga_bridge_class))
|
||||
return PTR_ERR(fpga_bridge_class);
|
||||
|
||||
fpga_bridge_class->dev_groups = fpga_bridge_groups;
|
||||
fpga_bridge_class->dev_release = fpga_bridge_dev_release;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void __exit fpga_bridge_dev_exit(void)
|
||||
{
|
||||
class_destroy(fpga_bridge_class);
|
||||
ida_destroy(&fpga_bridge_ida);
|
||||
}
|
||||
|
||||
MODULE_DESCRIPTION("FPGA Bridge Driver");
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_LICENSE("GPL v2");
|
||||
|
||||
subsys_initcall(fpga_bridge_dev_init);
|
||||
module_exit(fpga_bridge_dev_exit);
|
|
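The helpers exported above are meant to be used around a reprogramming cycle: collect every bridge feeding the region, disable them, program the FPGA, then re-enable the bridges and drop the references. A minimal sketch of that sequence, assuming the caller already has the bridge's device node, an fpga_image_info and an fpga_manager (the function name and the trimmed error handling are illustrative; fpga_mgr_firmware_load() is shown with the new signature from the fpga-mgr changes below):

#include <linux/fpga/fpga-bridge.h>
#include <linux/fpga/fpga-mgr.h>
#include <linux/list.h>

static int example_reprogram(struct device_node *bridge_np,
			     struct fpga_image_info *info,
			     struct fpga_manager *mgr,
			     const char *image)
{
	LIST_HEAD(bridge_list);
	int ret;

	/* Take an exclusive reference and park the bridge on our list. */
	ret = fpga_bridge_get_to_list(bridge_np, info, &bridge_list);
	if (ret)
		return ret;

	/* Gate traffic off before touching the FPGA. */
	ret = fpga_bridges_disable(&bridge_list);
	if (ret)
		goto out;

	/* Program the device. */
	ret = fpga_mgr_firmware_load(mgr, info, image);
	if (ret)
		goto out;

	/* Let traffic flow again only after programming succeeded. */
	ret = fpga_bridges_enable(&bridge_list);
out:
	fpga_bridges_put(&bridge_list);
	return ret;
}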
@@ -32,19 +32,20 @@ static struct class *fpga_mgr_class;
 /**
  * fpga_mgr_buf_load - load fpga from image in buffer
  * @mgr: fpga manager
- * @flags: flags setting fpga confuration modes
+ * @info: fpga image specific information
  * @buf: buffer contain fpga image
  * @count: byte count of buf
  *
  * Step the low level fpga manager through the device-specific steps of getting
  * an FPGA ready to be configured, writing the image to it, then doing whatever
  * post-configuration steps necessary. This code assumes the caller got the
- * mgr pointer from of_fpga_mgr_get() and checked that it is not an error code.
+ * mgr pointer from of_fpga_mgr_get() or fpga_mgr_get() and checked that it is
+ * not an error code.
  *
  * Return: 0 on success, negative error code otherwise.
  */
-int fpga_mgr_buf_load(struct fpga_manager *mgr, u32 flags, const char *buf,
-		      size_t count)
+int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info,
+		      const char *buf, size_t count)
 {
 	struct device *dev = &mgr->dev;
 	int ret;
@@ -52,10 +53,12 @@ int fpga_mgr_buf_load(struct fpga_manager *mgr, u32 flags, const char *buf,
 	/*
 	 * Call the low level driver's write_init function. This will do the
 	 * device-specific things to get the FPGA into the state where it is
-	 * ready to receive an FPGA image.
+	 * ready to receive an FPGA image. The low level driver only gets to
+	 * see the first initial_header_size bytes in the buffer.
 	 */
 	mgr->state = FPGA_MGR_STATE_WRITE_INIT;
-	ret = mgr->mops->write_init(mgr, flags, buf, count);
+	ret = mgr->mops->write_init(mgr, info, buf,
+				    min(mgr->mops->initial_header_size, count));
 	if (ret) {
 		dev_err(dev, "Error preparing FPGA for writing\n");
 		mgr->state = FPGA_MGR_STATE_WRITE_INIT_ERR;
@@ -78,7 +81,7 @@ int fpga_mgr_buf_load(struct fpga_manager *mgr, u32 flags, const char *buf,
 	 * steps to finish and set the FPGA into operating mode.
 	 */
 	mgr->state = FPGA_MGR_STATE_WRITE_COMPLETE;
-	ret = mgr->mops->write_complete(mgr, flags);
+	ret = mgr->mops->write_complete(mgr, info);
 	if (ret) {
 		dev_err(dev, "Error after writing image data to FPGA\n");
 		mgr->state = FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
@@ -93,17 +96,19 @@ EXPORT_SYMBOL_GPL(fpga_mgr_buf_load);
 /**
  * fpga_mgr_firmware_load - request firmware and load to fpga
  * @mgr: fpga manager
- * @flags: flags setting fpga confuration modes
+ * @info: fpga image specific information
  * @image_name: name of image file on the firmware search path
  *
  * Request an FPGA image using the firmware class, then write out to the FPGA.
  * Update the state before each step to provide info on what step failed if
  * there is a failure. This code assumes the caller got the mgr pointer
- * from of_fpga_mgr_get() and checked that it is not an error code.
+ * from of_fpga_mgr_get() or fpga_mgr_get() and checked that it is not an error
+ * code.
  *
  * Return: 0 on success, negative error code otherwise.
  */
-int fpga_mgr_firmware_load(struct fpga_manager *mgr, u32 flags,
+int fpga_mgr_firmware_load(struct fpga_manager *mgr,
+			   struct fpga_image_info *info,
 			   const char *image_name)
 {
 	struct device *dev = &mgr->dev;
@@ -121,7 +126,7 @@ int fpga_mgr_firmware_load(struct fpga_manager *mgr, u32 flags,
 		return ret;
 	}

-	ret = fpga_mgr_buf_load(mgr, flags, fw->data, fw->size);
+	ret = fpga_mgr_buf_load(mgr, info, fw->data, fw->size);

 	release_firmware(fw);

@ -181,30 +186,11 @@ static struct attribute *fpga_mgr_attrs[] = {
|
|||
};
|
||||
ATTRIBUTE_GROUPS(fpga_mgr);
|
||||
|
||||
static int fpga_mgr_of_node_match(struct device *dev, const void *data)
|
||||
{
|
||||
return dev->of_node == data;
|
||||
}
|
||||
|
||||
/**
|
||||
* of_fpga_mgr_get - get an exclusive reference to a fpga mgr
|
||||
* @node: device node
|
||||
*
|
||||
* Given a device node, get an exclusive reference to a fpga mgr.
|
||||
*
|
||||
* Return: fpga manager struct or IS_ERR() condition containing error code.
|
||||
*/
|
||||
struct fpga_manager *of_fpga_mgr_get(struct device_node *node)
|
||||
struct fpga_manager *__fpga_mgr_get(struct device *dev)
|
||||
{
|
||||
struct fpga_manager *mgr;
|
||||
struct device *dev;
|
||||
int ret = -ENODEV;
|
||||
|
||||
dev = class_find_device(fpga_mgr_class, NULL, node,
|
||||
fpga_mgr_of_node_match);
|
||||
if (!dev)
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
||||
mgr = to_fpga_manager(dev);
|
||||
if (!mgr)
|
||||
goto err_dev;
|
||||
|
@ -226,6 +212,55 @@ err_dev:
|
|||
put_device(dev);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
static int fpga_mgr_dev_match(struct device *dev, const void *data)
|
||||
{
|
||||
return dev->parent == data;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_mgr_get - get an exclusive reference to a fpga mgr
|
||||
* @dev: parent device that fpga mgr was registered with
|
||||
*
|
||||
* Given a device, get an exclusive reference to a fpga mgr.
|
||||
*
|
||||
* Return: fpga manager struct or IS_ERR() condition containing error code.
|
||||
*/
|
||||
struct fpga_manager *fpga_mgr_get(struct device *dev)
|
||||
{
|
||||
struct device *mgr_dev = class_find_device(fpga_mgr_class, NULL, dev,
|
||||
fpga_mgr_dev_match);
|
||||
if (!mgr_dev)
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
||||
return __fpga_mgr_get(mgr_dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(fpga_mgr_get);
|
||||
|
||||
static int fpga_mgr_of_node_match(struct device *dev, const void *data)
|
||||
{
|
||||
return dev->of_node == data;
|
||||
}
|
||||
|
||||
/**
|
||||
* of_fpga_mgr_get - get an exclusive reference to a fpga mgr
|
||||
* @node: device node
|
||||
*
|
||||
* Given a device node, get an exclusive reference to a fpga mgr.
|
||||
*
|
||||
* Return: fpga manager struct or IS_ERR() condition containing error code.
|
||||
*/
|
||||
struct fpga_manager *of_fpga_mgr_get(struct device_node *node)
|
||||
{
|
||||
struct device *dev;
|
||||
|
||||
dev = class_find_device(fpga_mgr_class, NULL, node,
|
||||
fpga_mgr_of_node_match);
|
||||
if (!dev)
|
||||
return ERR_PTR(-ENODEV);
|
||||
|
||||
return __fpga_mgr_get(dev);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_fpga_mgr_get);
|
||||
|
||||
/**
|
||||
|
|
|
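Seen from the consumer side, the fpga-mgr changes above boil down to: obtain a manager with of_fpga_mgr_get() or the new fpga_mgr_get(), describe the image with a struct fpga_image_info instead of a bare flags word, and hand both to fpga_mgr_firmware_load() or fpga_mgr_buf_load(). A rough usage sketch, where the node, firmware name and flag choice are made up for illustration:

static int example_program_fpga(struct device_node *mgr_node)
{
	struct fpga_image_info info = {
		.flags = FPGA_MGR_PARTIAL_RECONFIG,	/* assumed for the example */
	};
	struct fpga_manager *mgr;
	int ret;

	mgr = of_fpga_mgr_get(mgr_node);	/* or fpga_mgr_get(parent_dev) */
	if (IS_ERR(mgr))
		return PTR_ERR(mgr);

	ret = fpga_mgr_firmware_load(mgr, &info, "example-image.rbf");

	fpga_mgr_put(mgr);
	return ret;
}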
@ -0,0 +1,603 @@
|
|||
/*
|
||||
* FPGA Region - Device Tree support for FPGA programming under Linux
|
||||
*
|
||||
* Copyright (C) 2013-2016 Altera Corporation
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/fpga/fpga-bridge.h>
|
||||
#include <linux/fpga/fpga-mgr.h>
|
||||
#include <linux/idr.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_platform.h>
|
||||
#include <linux/slab.h>
|
||||
#include <linux/spinlock.h>
|
||||
|
||||
/**
|
||||
* struct fpga_region - FPGA Region structure
|
||||
* @dev: FPGA Region device
|
||||
* @mutex: enforces exclusive reference to region
|
||||
* @bridge_list: list of FPGA bridges specified in region
|
||||
* @info: fpga image specific information
|
||||
*/
|
||||
struct fpga_region {
|
||||
struct device dev;
|
||||
struct mutex mutex; /* for exclusive reference to region */
|
||||
struct list_head bridge_list;
|
||||
struct fpga_image_info *info;
|
||||
};
|
||||
|
||||
#define to_fpga_region(d) container_of(d, struct fpga_region, dev)
|
||||
|
||||
static DEFINE_IDA(fpga_region_ida);
|
||||
static struct class *fpga_region_class;
|
||||
|
||||
static const struct of_device_id fpga_region_of_match[] = {
|
||||
{ .compatible = "fpga-region", },
|
||||
{},
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, fpga_region_of_match);
|
||||
|
||||
static int fpga_region_of_node_match(struct device *dev, const void *data)
|
||||
{
|
||||
return dev->of_node == data;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_find - find FPGA region
|
||||
* @np: device node of FPGA Region
|
||||
* Caller will need to put_device(&region->dev) when done.
|
||||
* Returns FPGA Region struct or NULL
|
||||
*/
|
||||
static struct fpga_region *fpga_region_find(struct device_node *np)
|
||||
{
|
||||
struct device *dev;
|
||||
|
||||
dev = class_find_device(fpga_region_class, NULL, np,
|
||||
fpga_region_of_node_match);
|
||||
if (!dev)
|
||||
return NULL;
|
||||
|
||||
return to_fpga_region(dev);
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_get - get an exclusive reference to a fpga region
|
||||
* @region: FPGA Region struct
|
||||
*
|
||||
* Caller should call fpga_region_put() when done with region.
|
||||
*
|
||||
* Return fpga_region struct if successful.
|
||||
* Return -EBUSY if someone already has a reference to the region.
|
||||
* Return -ENODEV if @np is not a FPGA Region.
|
||||
*/
|
||||
static struct fpga_region *fpga_region_get(struct fpga_region *region)
|
||||
{
|
||||
struct device *dev = &region->dev;
|
||||
|
||||
if (!mutex_trylock(&region->mutex)) {
|
||||
dev_dbg(dev, "%s: FPGA Region already in use\n", __func__);
|
||||
return ERR_PTR(-EBUSY);
|
||||
}
|
||||
|
||||
get_device(dev);
|
||||
of_node_get(dev->of_node);
|
||||
if (!try_module_get(dev->parent->driver->owner)) {
|
||||
of_node_put(dev->of_node);
|
||||
put_device(dev);
|
||||
mutex_unlock(&region->mutex);
|
||||
return ERR_PTR(-ENODEV);
|
||||
}
|
||||
|
||||
dev_dbg(&region->dev, "get\n");
|
||||
|
||||
return region;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_put - release a reference to a region
|
||||
*
|
||||
* @region: FPGA region
|
||||
*/
|
||||
static void fpga_region_put(struct fpga_region *region)
|
||||
{
|
||||
struct device *dev = &region->dev;
|
||||
|
||||
dev_dbg(&region->dev, "put\n");
|
||||
|
||||
module_put(dev->parent->driver->owner);
|
||||
of_node_put(dev->of_node);
|
||||
put_device(dev);
|
||||
mutex_unlock(&region->mutex);
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_get_manager - get exclusive reference for FPGA manager
|
||||
* @region: FPGA region
|
||||
*
|
||||
* Get FPGA Manager from "fpga-mgr" property or from ancestor region.
|
||||
*
|
||||
* Caller should call fpga_mgr_put() when done with manager.
|
||||
*
|
||||
* Return: fpga manager struct or IS_ERR() condition containing error code.
|
||||
*/
|
||||
static struct fpga_manager *fpga_region_get_manager(struct fpga_region *region)
|
||||
{
|
||||
struct device *dev = &region->dev;
|
||||
struct device_node *np = dev->of_node;
|
||||
struct device_node *mgr_node;
|
||||
struct fpga_manager *mgr;
|
||||
|
||||
of_node_get(np);
|
||||
while (np) {
|
||||
if (of_device_is_compatible(np, "fpga-region")) {
|
||||
mgr_node = of_parse_phandle(np, "fpga-mgr", 0);
|
||||
if (mgr_node) {
|
||||
mgr = of_fpga_mgr_get(mgr_node);
|
||||
of_node_put(np);
|
||||
return mgr;
|
||||
}
|
||||
}
|
||||
np = of_get_next_parent(np);
|
||||
}
|
||||
of_node_put(np);
|
||||
|
||||
return ERR_PTR(-EINVAL);
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_get_bridges - create a list of bridges
|
||||
* @region: FPGA region
|
||||
* @overlay: device node of the overlay
|
||||
*
|
||||
* Create a list of bridges including the parent bridge and the bridges
|
||||
* specified by "fpga-bridges" property. Note that the
|
||||
* fpga_bridges_enable/disable/put functions are all fine with an empty list
|
||||
* if no bridges are specified.
|
||||
*
|
||||
* Caller should call fpga_bridges_put(&region->bridge_list) when
|
||||
* done with the bridges.
|
||||
*
|
||||
* Return 0 for success (even if there are no bridges specified)
|
||||
* or -EBUSY if any of the bridges are in use.
|
||||
*/
|
||||
static int fpga_region_get_bridges(struct fpga_region *region,
|
||||
struct device_node *overlay)
|
||||
{
|
||||
struct device *dev = &region->dev;
|
||||
struct device_node *region_np = dev->of_node;
|
||||
struct device_node *br, *np, *parent_br = NULL;
|
||||
int i, ret;
|
||||
|
||||
/* If parent is a bridge, add to list */
|
||||
ret = fpga_bridge_get_to_list(region_np->parent, region->info,
|
||||
&region->bridge_list);
|
||||
if (ret == -EBUSY)
|
||||
return ret;
|
||||
|
||||
if (!ret)
|
||||
parent_br = region_np->parent;
|
||||
|
||||
/* If overlay has a list of bridges, use it. */
|
||||
if (of_parse_phandle(overlay, "fpga-bridges", 0))
|
||||
np = overlay;
|
||||
else
|
||||
np = region_np;
|
||||
|
||||
for (i = 0; ; i++) {
|
||||
br = of_parse_phandle(np, "fpga-bridges", i);
|
||||
if (!br)
|
||||
break;
|
||||
|
||||
/* If parent bridge is in list, skip it. */
|
||||
if (br == parent_br)
|
||||
continue;
|
||||
|
||||
/* If node is a bridge, get it and add to list */
|
||||
ret = fpga_bridge_get_to_list(br, region->info,
|
||||
&region->bridge_list);
|
||||
|
||||
/* If any of the bridges are in use, give up */
|
||||
if (ret == -EBUSY) {
|
||||
fpga_bridges_put(&region->bridge_list);
|
||||
return -EBUSY;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_program_fpga - program FPGA
|
||||
* @region: FPGA region
|
||||
* @firmware_name: name of FPGA image firmware file
|
||||
* @overlay: device node of the overlay
|
||||
* Program an FPGA using information in the device tree.
|
||||
* Function assumes that there is a firmware-name property.
|
||||
* Return 0 for success or negative error code.
|
||||
*/
|
||||
static int fpga_region_program_fpga(struct fpga_region *region,
|
||||
const char *firmware_name,
|
||||
struct device_node *overlay)
|
||||
{
|
||||
struct fpga_manager *mgr;
|
||||
int ret;
|
||||
|
||||
region = fpga_region_get(region);
|
||||
if (IS_ERR(region)) {
|
||||
pr_err("failed to get fpga region\n");
|
||||
return PTR_ERR(region);
|
||||
}
|
||||
|
||||
mgr = fpga_region_get_manager(region);
|
||||
if (IS_ERR(mgr)) {
|
||||
pr_err("failed to get fpga region manager\n");
|
||||
return PTR_ERR(mgr);
|
||||
}
|
||||
|
||||
ret = fpga_region_get_bridges(region, overlay);
|
||||
if (ret) {
|
||||
pr_err("failed to get fpga region bridges\n");
|
||||
goto err_put_mgr;
|
||||
}
|
||||
|
||||
ret = fpga_bridges_disable(&region->bridge_list);
|
||||
if (ret) {
|
||||
pr_err("failed to disable region bridges\n");
|
||||
goto err_put_br;
|
||||
}
|
||||
|
||||
ret = fpga_mgr_firmware_load(mgr, region->info, firmware_name);
|
||||
if (ret) {
|
||||
pr_err("failed to load fpga image\n");
|
||||
goto err_put_br;
|
||||
}
|
||||
|
||||
ret = fpga_bridges_enable(&region->bridge_list);
|
||||
if (ret) {
|
||||
pr_err("failed to enable region bridges\n");
|
||||
goto err_put_br;
|
||||
}
|
||||
|
||||
fpga_mgr_put(mgr);
|
||||
fpga_region_put(region);
|
||||
|
||||
return 0;
|
||||
|
||||
err_put_br:
|
||||
fpga_bridges_put(&region->bridge_list);
|
||||
err_put_mgr:
|
||||
fpga_mgr_put(mgr);
|
||||
fpga_region_put(region);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* child_regions_with_firmware
|
||||
* @overlay: device node of the overlay
|
||||
*
|
||||
* If the overlay adds child FPGA regions, they are not allowed to have
|
||||
* firmware-name property.
|
||||
*
|
||||
* Return 0 for OK or -EINVAL if child FPGA region adds firmware-name.
|
||||
*/
|
||||
static int child_regions_with_firmware(struct device_node *overlay)
|
||||
{
|
||||
struct device_node *child_region;
|
||||
const char *child_firmware_name;
|
||||
int ret = 0;
|
||||
|
||||
of_node_get(overlay);
|
||||
|
||||
child_region = of_find_matching_node(overlay, fpga_region_of_match);
|
||||
while (child_region) {
|
||||
if (!of_property_read_string(child_region, "firmware-name",
|
||||
&child_firmware_name)) {
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
child_region = of_find_matching_node(child_region,
|
||||
fpga_region_of_match);
|
||||
}
|
||||
|
||||
of_node_put(child_region);
|
||||
|
||||
if (ret)
|
||||
pr_err("firmware-name not allowed in child FPGA region: %s",
|
||||
child_region->full_name);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_notify_pre_apply - pre-apply overlay notification
|
||||
*
|
||||
* @region: FPGA region that the overlay was applied to
|
||||
* @nd: overlay notification data
|
||||
*
|
||||
* Called when an overlay targeted to a FPGA Region is about to be
|
||||
* applied. Function will check the properties that will be added to the FPGA
|
||||
* region. If the checks pass, it will program the FPGA.
|
||||
*
|
||||
* The checks are:
|
||||
* The overlay must add either firmware-name or external-fpga-config property
|
||||
* to the FPGA Region.
|
||||
*
|
||||
* firmware-name : program the FPGA
|
||||
* external-fpga-config : FPGA is already programmed
|
||||
*
|
||||
* The overlay can add other FPGA regions, but child FPGA regions cannot have a
|
||||
* firmware-name property since those regions don't exist yet.
|
||||
*
|
||||
* If the overlay breaks the rules, the notifier returns an error and the
|
||||
* overlay is rejected before it goes into the main tree.
|
||||
*
|
||||
* Returns 0 for success or negative error code for failure.
|
||||
*/
|
||||
static int fpga_region_notify_pre_apply(struct fpga_region *region,
|
||||
struct of_overlay_notify_data *nd)
|
||||
{
|
||||
const char *firmware_name = NULL;
|
||||
struct fpga_image_info *info;
|
||||
int ret;
|
||||
|
||||
info = devm_kzalloc(&region->dev, sizeof(*info), GFP_KERNEL);
|
||||
if (!info)
|
||||
return -ENOMEM;
|
||||
|
||||
region->info = info;
|
||||
|
||||
/* Reject overlay if child FPGA Regions have firmware-name property */
|
||||
ret = child_regions_with_firmware(nd->overlay);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Read FPGA region properties from the overlay */
|
||||
if (of_property_read_bool(nd->overlay, "partial-fpga-config"))
|
||||
info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
|
||||
|
||||
if (of_property_read_bool(nd->overlay, "external-fpga-config"))
|
||||
info->flags |= FPGA_MGR_EXTERNAL_CONFIG;
|
||||
|
||||
of_property_read_string(nd->overlay, "firmware-name", &firmware_name);
|
||||
|
||||
of_property_read_u32(nd->overlay, "region-unfreeze-timeout-us",
|
||||
&info->enable_timeout_us);
|
||||
|
||||
of_property_read_u32(nd->overlay, "region-freeze-timeout-us",
|
||||
&info->disable_timeout_us);
|
||||
|
||||
/* If FPGA was externally programmed, don't specify firmware */
|
||||
if ((info->flags & FPGA_MGR_EXTERNAL_CONFIG) && firmware_name) {
|
||||
pr_err("error: specified firmware and external-fpga-config");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* FPGA is already configured externally. We're done. */
|
||||
if (info->flags & FPGA_MGR_EXTERNAL_CONFIG)
|
||||
return 0;
|
||||
|
||||
/* If we got this far, we should be programming the FPGA */
|
||||
if (!firmware_name) {
|
||||
pr_err("should specify firmware-name or external-fpga-config\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return fpga_region_program_fpga(region, firmware_name, nd->overlay);
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_notify_post_remove - post-remove overlay notification
|
||||
*
|
||||
* @region: FPGA region that was targeted by the overlay that was removed
|
||||
* @nd: overlay notification data
|
||||
*
|
||||
* Called after an overlay has been removed if the overlay's target was a
|
||||
* FPGA region.
|
||||
*/
|
||||
static void fpga_region_notify_post_remove(struct fpga_region *region,
|
||||
struct of_overlay_notify_data *nd)
|
||||
{
|
||||
fpga_bridges_disable(&region->bridge_list);
|
||||
fpga_bridges_put(&region->bridge_list);
|
||||
devm_kfree(&region->dev, region->info);
|
||||
region->info = NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* of_fpga_region_notify - reconfig notifier for dynamic DT changes
|
||||
* @nb: notifier block
|
||||
* @action: notifier action
|
||||
* @arg: reconfig data
|
||||
*
|
||||
* This notifier handles programming a FPGA when a "firmware-name" property is
|
||||
* added to a fpga-region.
|
||||
*
|
||||
* Returns NOTIFY_OK or error if FPGA programming fails.
|
||||
*/
|
||||
static int of_fpga_region_notify(struct notifier_block *nb,
|
||||
unsigned long action, void *arg)
|
||||
{
|
||||
struct of_overlay_notify_data *nd = arg;
|
||||
struct fpga_region *region;
|
||||
int ret;
|
||||
|
||||
switch (action) {
|
||||
case OF_OVERLAY_PRE_APPLY:
|
||||
pr_debug("%s OF_OVERLAY_PRE_APPLY\n", __func__);
|
||||
break;
|
||||
case OF_OVERLAY_POST_APPLY:
|
||||
pr_debug("%s OF_OVERLAY_POST_APPLY\n", __func__);
|
||||
return NOTIFY_OK; /* not for us */
|
||||
case OF_OVERLAY_PRE_REMOVE:
|
||||
pr_debug("%s OF_OVERLAY_PRE_REMOVE\n", __func__);
|
||||
return NOTIFY_OK; /* not for us */
|
||||
case OF_OVERLAY_POST_REMOVE:
|
||||
pr_debug("%s OF_OVERLAY_POST_REMOVE\n", __func__);
|
||||
break;
|
||||
default: /* should not happen */
|
||||
return NOTIFY_OK;
|
||||
}
|
||||
|
||||
region = fpga_region_find(nd->target);
|
||||
if (!region)
|
||||
return NOTIFY_OK;
|
||||
|
||||
ret = 0;
|
||||
switch (action) {
|
||||
case OF_OVERLAY_PRE_APPLY:
|
||||
ret = fpga_region_notify_pre_apply(region, nd);
|
||||
break;
|
||||
|
||||
case OF_OVERLAY_POST_REMOVE:
|
||||
fpga_region_notify_post_remove(region, nd);
|
||||
break;
|
||||
}
|
||||
|
||||
put_device(&region->dev);
|
||||
|
||||
if (ret)
|
||||
return notifier_from_errno(ret);
|
||||
|
||||
return NOTIFY_OK;
|
||||
}
|
||||
|
||||
static struct notifier_block fpga_region_of_nb = {
|
||||
.notifier_call = of_fpga_region_notify,
|
||||
};
|
||||
|
||||
static int fpga_region_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct device_node *np = dev->of_node;
|
||||
struct fpga_region *region;
|
||||
int id, ret = 0;
|
||||
|
||||
region = kzalloc(sizeof(*region), GFP_KERNEL);
|
||||
if (!region)
|
||||
return -ENOMEM;
|
||||
|
||||
id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL);
|
||||
if (id < 0) {
|
||||
ret = id;
|
||||
goto err_kfree;
|
||||
}
|
||||
|
||||
mutex_init(&region->mutex);
|
||||
INIT_LIST_HEAD(&region->bridge_list);
|
||||
|
||||
device_initialize(&region->dev);
|
||||
region->dev.class = fpga_region_class;
|
||||
region->dev.parent = dev;
|
||||
region->dev.of_node = np;
|
||||
region->dev.id = id;
|
||||
dev_set_drvdata(dev, region);
|
||||
|
||||
ret = dev_set_name(&region->dev, "region%d", id);
|
||||
if (ret)
|
||||
goto err_remove;
|
||||
|
||||
ret = device_add(&region->dev);
|
||||
if (ret)
|
||||
goto err_remove;
|
||||
|
||||
of_platform_populate(np, fpga_region_of_match, NULL, &region->dev);
|
||||
|
||||
dev_info(dev, "FPGA Region probed\n");
|
||||
|
||||
return 0;
|
||||
|
||||
err_remove:
|
||||
ida_simple_remove(&fpga_region_ida, id);
|
||||
err_kfree:
|
||||
kfree(region);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int fpga_region_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct fpga_region *region = platform_get_drvdata(pdev);
|
||||
|
||||
device_unregister(&region->dev);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct platform_driver fpga_region_driver = {
|
||||
.probe = fpga_region_probe,
|
||||
.remove = fpga_region_remove,
|
||||
.driver = {
|
||||
.name = "fpga-region",
|
||||
.of_match_table = of_match_ptr(fpga_region_of_match),
|
||||
},
|
||||
};
|
||||
|
||||
static void fpga_region_dev_release(struct device *dev)
|
||||
{
|
||||
struct fpga_region *region = to_fpga_region(dev);
|
||||
|
||||
ida_simple_remove(&fpga_region_ida, region->dev.id);
|
||||
kfree(region);
|
||||
}
|
||||
|
||||
/**
|
||||
* fpga_region_init - init function for fpga_region class
|
||||
* Creates the fpga_region class and registers a reconfig notifier.
|
||||
*/
|
||||
static int __init fpga_region_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
fpga_region_class = class_create(THIS_MODULE, "fpga_region");
|
||||
if (IS_ERR(fpga_region_class))
|
||||
return PTR_ERR(fpga_region_class);
|
||||
|
||||
fpga_region_class->dev_release = fpga_region_dev_release;
|
||||
|
||||
ret = of_overlay_notifier_register(&fpga_region_of_nb);
|
||||
if (ret)
|
||||
goto err_class;
|
||||
|
||||
ret = platform_driver_register(&fpga_region_driver);
|
||||
if (ret)
|
||||
goto err_plat;
|
||||
|
||||
return 0;
|
||||
|
||||
err_plat:
|
||||
of_overlay_notifier_unregister(&fpga_region_of_nb);
|
||||
err_class:
|
||||
class_destroy(fpga_region_class);
|
||||
ida_destroy(&fpga_region_ida);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void __exit fpga_region_exit(void)
|
||||
{
|
||||
platform_driver_unregister(&fpga_region_driver);
|
||||
of_overlay_notifier_unregister(&fpga_region_of_nb);
|
||||
class_destroy(fpga_region_class);
|
||||
ida_destroy(&fpga_region_ida);
|
||||
}
|
||||
|
||||
subsys_initcall(fpga_region_init);
|
||||
module_exit(fpga_region_exit);
|
||||
|
||||
MODULE_DESCRIPTION("FPGA Region");
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -0,0 +1,557 @@
|
|||
/*
|
||||
* FPGA Manager Driver for Altera Arria10 SoCFPGA
|
||||
*
|
||||
* Copyright (C) 2015-2016 Altera Corporation
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify it
|
||||
* under the terms and conditions of the GNU General Public License,
|
||||
* version 2, as published by the Free Software Foundation.
|
||||
*
|
||||
* This program is distributed in the hope it will be useful, but WITHOUT
|
||||
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
|
||||
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
|
||||
* more details.
|
||||
*
|
||||
* You should have received a copy of the GNU General Public License along with
|
||||
* this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
*/
|
||||
|
||||
#include <linux/clk.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/fpga/fpga-mgr.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/regmap.h>
|
||||
|
||||
#define A10_FPGAMGR_DCLKCNT_OFST 0x08
|
||||
#define A10_FPGAMGR_DCLKSTAT_OFST 0x0c
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_OFST 0x70
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_01_OFST 0x74
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_OFST 0x78
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_OFST 0x80
|
||||
|
||||
#define A10_FPGAMGR_DCLKSTAT_DCLKDONE BIT(0)
|
||||
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_NCONFIG BIT(0)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_NSTATUS BIT(1)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_CONDONE BIT(2)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_NCONFIG BIT(8)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_NSTATUS_OE BIT(16)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_00_S2F_CONDONE_OE BIT(24)
|
||||
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_01_S2F_NENABLE_CONFIG BIT(0)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_01_S2F_PR_REQUEST BIT(16)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_01_S2F_NCE BIT(24)
|
||||
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_EN_CFG_CTRL BIT(0)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_CDRATIO_MASK (BIT(16) | BIT(17))
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_CDRATIO_SHIFT 16
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_CFGWIDTH BIT(24)
|
||||
#define A10_FPGAMGR_IMGCFG_CTL_02_CFGWIDTH_SHIFT 24
|
||||
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_CRC_ERROR BIT(0)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_EARLY_USERMODE BIT(1)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_USERMODE BIT(2)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_NSTATUS_PIN BIT(4)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_CONDONE_PIN BIT(6)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_PR_READY BIT(9)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_PR_DONE BIT(10)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_PR_ERROR BIT(11)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_NCONFIG_PIN BIT(12)
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_MSEL_MASK (BIT(16) | BIT(17) | BIT(18))
|
||||
#define A10_FPGAMGR_IMGCFG_STAT_F2S_MSEL_SHIFT 16
|
||||
|
||||
/* FPGA CD Ratio Value */
|
||||
#define CDRATIO_x1 0x0
|
||||
#define CDRATIO_x2 0x1
|
||||
#define CDRATIO_x4 0x2
|
||||
#define CDRATIO_x8 0x3
|
||||
|
||||
/* Configuration width 16/32 bit */
|
||||
#define CFGWDTH_32 1
|
||||
#define CFGWDTH_16 0
|
||||
|
||||
/*
|
||||
* struct a10_fpga_priv - private data for fpga manager
|
||||
* @regmap: regmap for register access
|
||||
* @fpga_data_addr: iomap for single address data register to FPGA
|
||||
* @clk: clock
|
||||
*/
|
||||
struct a10_fpga_priv {
|
||||
struct regmap *regmap;
|
||||
void __iomem *fpga_data_addr;
|
||||
struct clk *clk;
|
||||
};
|
||||
|
||||
static bool socfpga_a10_fpga_writeable_reg(struct device *dev, unsigned int reg)
|
||||
{
|
||||
switch (reg) {
|
||||
case A10_FPGAMGR_DCLKCNT_OFST:
|
||||
case A10_FPGAMGR_DCLKSTAT_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_00_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_01_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_02_OFST:
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static bool socfpga_a10_fpga_readable_reg(struct device *dev, unsigned int reg)
|
||||
{
|
||||
switch (reg) {
|
||||
case A10_FPGAMGR_DCLKCNT_OFST:
|
||||
case A10_FPGAMGR_DCLKSTAT_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_00_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_01_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_CTL_02_OFST:
|
||||
case A10_FPGAMGR_IMGCFG_STAT_OFST:
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
static const struct regmap_config socfpga_a10_fpga_regmap_config = {
|
||||
.reg_bits = 32,
|
||||
.reg_stride = 4,
|
||||
.val_bits = 32,
|
||||
.writeable_reg = socfpga_a10_fpga_writeable_reg,
|
||||
.readable_reg = socfpga_a10_fpga_readable_reg,
|
||||
.max_register = A10_FPGAMGR_IMGCFG_STAT_OFST,
|
||||
.cache_type = REGCACHE_NONE,
|
||||
};
|
||||
|
||||
/*
|
||||
* from the register map description of cdratio in imgcfg_ctrl_02:
|
||||
* Normal Configuration : 32bit Passive Parallel
|
||||
* Partial Reconfiguration : 16bit Passive Parallel
|
||||
*/
|
||||
static void socfpga_a10_fpga_set_cfg_width(struct a10_fpga_priv *priv,
|
||||
int width)
|
||||
{
|
||||
width <<= A10_FPGAMGR_IMGCFG_CTL_02_CFGWIDTH_SHIFT;
|
||||
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_02_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_02_CFGWIDTH, width);
|
||||
}
|
||||
|
||||
static void socfpga_a10_fpga_generate_dclks(struct a10_fpga_priv *priv,
|
||||
u32 count)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
/* Clear any existing DONE status. */
|
||||
regmap_write(priv->regmap, A10_FPGAMGR_DCLKSTAT_OFST,
|
||||
A10_FPGAMGR_DCLKSTAT_DCLKDONE);
|
||||
|
||||
/* Issue the requested number of DCLKs via the regmap. */
|
||||
regmap_write(priv->regmap, A10_FPGAMGR_DCLKCNT_OFST, count);
|
||||
|
||||
/* Wait until the DCLK count is done. */
|
||||
regmap_read_poll_timeout(priv->regmap, A10_FPGAMGR_DCLKSTAT_OFST, val,
|
||||
val, 1, 100);
|
||||
|
||||
/* Clear DONE status. */
|
||||
regmap_write(priv->regmap, A10_FPGAMGR_DCLKSTAT_OFST,
|
||||
A10_FPGAMGR_DCLKSTAT_DCLKDONE);
|
||||
}
|
||||
|
||||
#define RBF_ENCRYPTION_MODE_OFFSET 69
|
||||
#define RBF_DECOMPRESS_OFFSET 229
|
||||
|
||||
static int socfpga_a10_fpga_encrypted(u32 *buf32, size_t buf32_size)
|
||||
{
|
||||
if (buf32_size < RBF_ENCRYPTION_MODE_OFFSET + 1)
|
||||
return -EINVAL;
|
||||
|
||||
/* Is the bitstream encrypted? */
|
||||
return ((buf32[RBF_ENCRYPTION_MODE_OFFSET] >> 2) & 3) != 0;
|
||||
}
|
||||
|
||||
static int socfpga_a10_fpga_compressed(u32 *buf32, size_t buf32_size)
|
||||
{
|
||||
if (buf32_size < RBF_DECOMPRESS_OFFSET + 1)
|
||||
return -EINVAL;
|
||||
|
||||
/* Is the bitstream compressed? */
|
||||
return !((buf32[RBF_DECOMPRESS_OFFSET] >> 1) & 1);
|
||||
}
|
||||
|
||||
static unsigned int socfpga_a10_fpga_get_cd_ratio(unsigned int cfg_width,
|
||||
bool encrypt, bool compress)
|
||||
{
|
||||
unsigned int cd_ratio;
|
||||
|
||||
/*
|
||||
* cd ratio is dependent on cfg width and whether the bitstream
|
||||
* is encrypted and/or compressed.
|
||||
*
|
||||
* | width | encr. | compr. | cd ratio |
|
||||
* | 16 | 0 | 0 | 1 |
|
||||
* | 16 | 0 | 1 | 4 |
|
||||
* | 16 | 1 | 0 | 2 |
|
||||
* | 16 | 1 | 1 | 4 |
|
||||
* | 32 | 0 | 0 | 1 |
|
||||
* | 32 | 0 | 1 | 8 |
|
||||
* | 32 | 1 | 0 | 4 |
|
||||
* | 32 | 1 | 1 | 8 |
|
||||
*/
|
||||
if (!compress && !encrypt)
|
||||
return CDRATIO_x1;
|
||||
|
||||
if (compress)
|
||||
cd_ratio = CDRATIO_x4;
|
||||
else
|
||||
cd_ratio = CDRATIO_x2;
|
||||
|
||||
/* If 32 bit, double the cd ratio by incrementing the field */
|
||||
if (cfg_width == CFGWDTH_32)
|
||||
cd_ratio += 1;
|
||||
|
||||
return cd_ratio;
|
||||
}
|
||||
|
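Reading the helper against the table: compression selects CDRATIO_x4 and encryption alone CDRATIO_x2, and the 32-bit width then bumps the field by one, which doubles the ratio. Two illustrative spot checks of that behaviour (not part of the driver, purely an example):

static void socfpga_a10_cd_ratio_examples(void)
{
	/* 32-bit, compressed: CDRATIO_x4 incremented once -> CDRATIO_x8 */
	WARN_ON(socfpga_a10_fpga_get_cd_ratio(CFGWDTH_32, false, true) != CDRATIO_x8);

	/* 16-bit, encrypted only: CDRATIO_x2, no increment */
	WARN_ON(socfpga_a10_fpga_get_cd_ratio(CFGWDTH_16, true, false) != CDRATIO_x2);
}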
||||
static int socfpga_a10_fpga_set_cdratio(struct fpga_manager *mgr,
|
||||
unsigned int cfg_width,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
unsigned int cd_ratio;
|
||||
int encrypt, compress;
|
||||
|
||||
encrypt = socfpga_a10_fpga_encrypted((u32 *)buf, count / 4);
|
||||
if (encrypt < 0)
|
||||
return -EINVAL;
|
||||
|
||||
compress = socfpga_a10_fpga_compressed((u32 *)buf, count / 4);
|
||||
if (compress < 0)
|
||||
return -EINVAL;
|
||||
|
||||
cd_ratio = socfpga_a10_fpga_get_cd_ratio(cfg_width, encrypt, compress);
|
||||
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_02_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_02_CDRATIO_MASK,
|
||||
cd_ratio << A10_FPGAMGR_IMGCFG_CTL_02_CDRATIO_SHIFT);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static u32 socfpga_a10_fpga_read_stat(struct a10_fpga_priv *priv)
|
||||
{
|
||||
u32 val;
|
||||
|
||||
regmap_read(priv->regmap, A10_FPGAMGR_IMGCFG_STAT_OFST, &val);
|
||||
|
||||
return val;
|
||||
}
|
||||
|
||||
static int socfpga_a10_fpga_wait_for_pr_ready(struct a10_fpga_priv *priv)
|
||||
{
|
||||
u32 reg, i;
|
||||
|
||||
for (i = 0; i < 10 ; i++) {
|
||||
reg = socfpga_a10_fpga_read_stat(priv);
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_PR_ERROR)
|
||||
return -EINVAL;
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_PR_READY)
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
static int socfpga_a10_fpga_wait_for_pr_done(struct a10_fpga_priv *priv)
|
||||
{
|
||||
u32 reg, i;
|
||||
|
||||
for (i = 0; i < 10 ; i++) {
|
||||
reg = socfpga_a10_fpga_read_stat(priv);
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_PR_ERROR)
|
||||
return -EINVAL;
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_PR_DONE)
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
/* Start the FPGA programming by initializing the FPGA Manager */
|
||||
static int socfpga_a10_fpga_write_init(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
unsigned int cfg_width;
|
||||
u32 msel, stat, mask;
|
||||
int ret;
|
||||
|
||||
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG)
|
||||
cfg_width = CFGWDTH_16;
|
||||
else
|
||||
return -EINVAL;
|
||||
|
||||
/* Check for passive parallel (msel == 000 or 001) */
|
||||
msel = socfpga_a10_fpga_read_stat(priv);
|
||||
msel &= A10_FPGAMGR_IMGCFG_STAT_F2S_MSEL_MASK;
|
||||
msel >>= A10_FPGAMGR_IMGCFG_STAT_F2S_MSEL_SHIFT;
|
||||
if ((msel != 0) && (msel != 1)) {
|
||||
dev_dbg(&mgr->dev, "Fail: invalid msel=%d\n", msel);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Make sure no external devices are interfering */
|
||||
stat = socfpga_a10_fpga_read_stat(priv);
|
||||
mask = A10_FPGAMGR_IMGCFG_STAT_F2S_NCONFIG_PIN |
|
||||
A10_FPGAMGR_IMGCFG_STAT_F2S_NSTATUS_PIN;
|
||||
if ((stat & mask) != mask)
|
||||
return -EINVAL;
|
||||
|
||||
/* Set cfg width */
|
||||
socfpga_a10_fpga_set_cfg_width(priv, cfg_width);
|
||||
|
||||
/* Determine cd ratio from bitstream header and set cd ratio */
|
||||
ret = socfpga_a10_fpga_set_cdratio(mgr, cfg_width, buf, count);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/*
|
||||
* Clear s2f_nce to enable chip select. Leave pr_request
|
||||
* unasserted and override disabled.
|
||||
*/
|
||||
regmap_write(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NENABLE_CONFIG);
|
||||
|
||||
/* Set cfg_ctrl to enable s2f dclk and data */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_02_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_02_EN_CFG_CTRL,
|
||||
A10_FPGAMGR_IMGCFG_CTL_02_EN_CFG_CTRL);
|
||||
|
||||
/*
|
||||
* Disable overrides not needed for pr.
|
||||
* s2f_config==1 leaves reset deasserted.
|
||||
*/
|
||||
regmap_write(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_00_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_NCONFIG |
|
||||
A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_NSTATUS |
|
||||
A10_FPGAMGR_IMGCFG_CTL_00_S2F_NENABLE_CONDONE |
|
||||
A10_FPGAMGR_IMGCFG_CTL_00_S2F_NCONFIG);
|
||||
|
||||
/* Enable override for data, dclk, nce, and pr_request to CSS */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NENABLE_CONFIG, 0);
|
||||
|
||||
/* Send some clocks to clear out any errors */
|
||||
socfpga_a10_fpga_generate_dclks(priv, 256);
|
||||
|
||||
/* Assert pr_request */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_PR_REQUEST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_PR_REQUEST);
|
||||
|
||||
/* Provide 2048 DCLKs before starting the config data streaming. */
|
||||
socfpga_a10_fpga_generate_dclks(priv, 0x7ff);
|
||||
|
||||
/* Wait for pr_ready */
|
||||
return socfpga_a10_fpga_wait_for_pr_ready(priv);
|
||||
}
|
||||
|
||||
/*
|
||||
* write data to the FPGA data register
|
||||
*/
|
||||
static int socfpga_a10_fpga_write(struct fpga_manager *mgr, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
u32 *buffer_32 = (u32 *)buf;
|
||||
size_t i = 0;
|
||||
|
||||
if (count <= 0)
|
||||
return -EINVAL;
|
||||
|
||||
/* Write out the complete 32-bit chunks */
|
||||
while (count >= sizeof(u32)) {
|
||||
writel(buffer_32[i++], priv->fpga_data_addr);
|
||||
count -= sizeof(u32);
|
||||
}
|
||||
|
||||
/* Write out remaining non 32-bit chunks */
|
||||
switch (count) {
|
||||
case 3:
|
||||
writel(buffer_32[i++] & 0x00ffffff, priv->fpga_data_addr);
|
||||
break;
|
||||
case 2:
|
||||
writel(buffer_32[i++] & 0x0000ffff, priv->fpga_data_addr);
|
||||
break;
|
||||
case 1:
|
||||
writel(buffer_32[i++] & 0x000000ff, priv->fpga_data_addr);
|
||||
break;
|
||||
case 0:
|
||||
break;
|
||||
default:
|
||||
/* This will never happen */
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int socfpga_a10_fpga_write_complete(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
u32 reg;
|
||||
int ret;
|
||||
|
||||
/* Wait for pr_done */
|
||||
ret = socfpga_a10_fpga_wait_for_pr_done(priv);
|
||||
|
||||
/* Clear pr_request */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_PR_REQUEST, 0);
|
||||
|
||||
/* Send some clocks to clear out any errors */
|
||||
socfpga_a10_fpga_generate_dclks(priv, 256);
|
||||
|
||||
/* Disable s2f dclk and data */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_02_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_02_EN_CFG_CTRL, 0);
|
||||
|
||||
/* Deassert chip select */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NCE,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NCE);
|
||||
|
||||
/* Disable data, dclk, nce, and pr_request override to CSS */
|
||||
regmap_update_bits(priv->regmap, A10_FPGAMGR_IMGCFG_CTL_01_OFST,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NENABLE_CONFIG,
|
||||
A10_FPGAMGR_IMGCFG_CTL_01_S2F_NENABLE_CONFIG);
|
||||
|
||||
/* Return any errors regarding pr_done or pr_error */
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* Final check */
|
||||
reg = socfpga_a10_fpga_read_stat(priv);
|
||||
|
||||
if (((reg & A10_FPGAMGR_IMGCFG_STAT_F2S_USERMODE) == 0) ||
|
||||
((reg & A10_FPGAMGR_IMGCFG_STAT_F2S_CONDONE_PIN) == 0) ||
|
||||
((reg & A10_FPGAMGR_IMGCFG_STAT_F2S_NSTATUS_PIN) == 0)) {
|
||||
dev_dbg(&mgr->dev,
|
||||
"Timeout in final check. Status=%08xf\n", reg);
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static enum fpga_mgr_states socfpga_a10_fpga_state(struct fpga_manager *mgr)
|
||||
{
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
u32 reg = socfpga_a10_fpga_read_stat(priv);
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_USERMODE)
|
||||
return FPGA_MGR_STATE_OPERATING;
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_PR_READY)
|
||||
return FPGA_MGR_STATE_WRITE;
|
||||
|
||||
if (reg & A10_FPGAMGR_IMGCFG_STAT_F2S_CRC_ERROR)
|
||||
return FPGA_MGR_STATE_WRITE_COMPLETE_ERR;
|
||||
|
||||
if ((reg & A10_FPGAMGR_IMGCFG_STAT_F2S_NSTATUS_PIN) == 0)
|
||||
return FPGA_MGR_STATE_RESET;
|
||||
|
||||
return FPGA_MGR_STATE_UNKNOWN;
|
||||
}
|
||||
|
||||
static const struct fpga_manager_ops socfpga_a10_fpga_mgr_ops = {
|
||||
.initial_header_size = (RBF_DECOMPRESS_OFFSET + 1) * 4,
|
||||
.state = socfpga_a10_fpga_state,
|
||||
.write_init = socfpga_a10_fpga_write_init,
|
||||
.write = socfpga_a10_fpga_write,
|
||||
.write_complete = socfpga_a10_fpga_write_complete,
|
||||
};
|
||||
|
||||
static int socfpga_a10_fpga_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct a10_fpga_priv *priv;
|
||||
void __iomem *reg_base;
|
||||
struct resource *res;
|
||||
int ret;
|
||||
|
||||
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
/* First mmio base is for register access */
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
reg_base = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(reg_base))
|
||||
return PTR_ERR(reg_base);
|
||||
|
||||
/* Second mmio base is for writing FPGA image data */
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
|
||||
priv->fpga_data_addr = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(priv->fpga_data_addr))
|
||||
return PTR_ERR(priv->fpga_data_addr);
|
||||
|
||||
/* regmap for register access */
|
||||
priv->regmap = devm_regmap_init_mmio(dev, reg_base,
|
||||
&socfpga_a10_fpga_regmap_config);
|
||||
if (IS_ERR(priv->regmap))
|
||||
return -ENODEV;
|
||||
|
||||
priv->clk = devm_clk_get(dev, NULL);
|
||||
if (IS_ERR(priv->clk)) {
|
||||
dev_err(dev, "no clock specified\n");
|
||||
return PTR_ERR(priv->clk);
|
||||
}
|
||||
|
||||
ret = clk_prepare_enable(priv->clk);
|
||||
if (ret) {
|
||||
dev_err(dev, "could not enable clock\n");
|
||||
return -EBUSY;
|
||||
}
|
||||
|
||||
return fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager",
|
||||
&socfpga_a10_fpga_mgr_ops, priv);
|
||||
}
|
||||
|
||||
static int socfpga_a10_fpga_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct fpga_manager *mgr = platform_get_drvdata(pdev);
|
||||
struct a10_fpga_priv *priv = mgr->priv;
|
||||
|
||||
fpga_mgr_unregister(&pdev->dev);
|
||||
clk_disable_unprepare(priv->clk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct of_device_id socfpga_a10_fpga_of_match[] = {
|
||||
{ .compatible = "altr,socfpga-a10-fpga-mgr", },
|
||||
{},
|
||||
};
|
||||
|
||||
MODULE_DEVICE_TABLE(of, socfpga_a10_fpga_of_match);
|
||||
|
||||
static struct platform_driver socfpga_a10_fpga_driver = {
|
||||
.probe = socfpga_a10_fpga_probe,
|
||||
.remove = socfpga_a10_fpga_remove,
|
||||
.driver = {
|
||||
.name = "socfpga_a10_fpga_manager",
|
||||
.of_match_table = socfpga_a10_fpga_of_match,
|
||||
},
|
||||
};
|
||||
|
||||
module_platform_driver(socfpga_a10_fpga_driver);
|
||||
|
||||
MODULE_AUTHOR("Alan Tull <atull@opensource.altera.com>");
|
||||
MODULE_DESCRIPTION("SoCFPGA Arria10 FPGA Manager");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -407,13 +407,14 @@ static int socfpga_fpga_reset(struct fpga_manager *mgr)
|
|||
/*
|
||||
* Prepare the FPGA to receive the configuration data.
|
||||
*/
|
||||
static int socfpga_fpga_ops_configure_init(struct fpga_manager *mgr, u32 flags,
|
||||
static int socfpga_fpga_ops_configure_init(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct socfpga_fpga_priv *priv = mgr->priv;
|
||||
int ret;
|
||||
|
||||
if (flags & FPGA_MGR_PARTIAL_RECONFIG) {
|
||||
if (info->flags & FPGA_MGR_PARTIAL_RECONFIG) {
|
||||
dev_err(&mgr->dev, "Partial reconfiguration not supported.\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
@ -478,7 +479,7 @@ static int socfpga_fpga_ops_configure_write(struct fpga_manager *mgr,
|
|||
}
|
||||
|
||||
static int socfpga_fpga_ops_configure_complete(struct fpga_manager *mgr,
|
||||
u32 flags)
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct socfpga_fpga_priv *priv = mgr->priv;
|
||||
u32 status;
|
||||
|
|
|
@ -118,7 +118,6 @@
|
|||
#define FPGA_RST_NONE_MASK 0x0
|
||||
|
||||
struct zynq_fpga_priv {
|
||||
struct device *dev;
|
||||
int irq;
|
||||
struct clk *clk;
|
||||
|
||||
|
@ -175,7 +174,8 @@ static irqreturn_t zynq_fpga_isr(int irq, void *data)
|
|||
return IRQ_HANDLED;
|
||||
}
|
||||
|
||||
static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
||||
static int zynq_fpga_ops_write_init(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count)
|
||||
{
|
||||
struct zynq_fpga_priv *priv;
|
||||
|
@ -189,7 +189,7 @@ static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
|||
return err;
|
||||
|
||||
/* don't globally reset PL if we're doing partial reconfig */
|
||||
if (!(flags & FPGA_MGR_PARTIAL_RECONFIG)) {
|
||||
if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
|
||||
/* assert AXI interface resets */
|
||||
regmap_write(priv->slcr, SLCR_FPGA_RST_CTRL_OFFSET,
|
||||
FPGA_RST_ALL_MASK);
|
||||
|
@ -217,7 +217,7 @@ static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
|||
INIT_POLL_DELAY,
|
||||
INIT_POLL_TIMEOUT);
|
||||
if (err) {
|
||||
dev_err(priv->dev, "Timeout waiting for PCFG_INIT");
|
||||
dev_err(&mgr->dev, "Timeout waiting for PCFG_INIT\n");
|
||||
goto out_err;
|
||||
}
|
||||
|
||||
|
@ -231,7 +231,7 @@ static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
|||
INIT_POLL_DELAY,
|
||||
INIT_POLL_TIMEOUT);
|
||||
if (err) {
|
||||
dev_err(priv->dev, "Timeout waiting for !PCFG_INIT");
|
||||
dev_err(&mgr->dev, "Timeout waiting for !PCFG_INIT\n");
|
||||
goto out_err;
|
||||
}
|
||||
|
||||
|
@ -245,7 +245,7 @@ static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
|||
INIT_POLL_DELAY,
|
||||
INIT_POLL_TIMEOUT);
|
||||
if (err) {
|
||||
dev_err(priv->dev, "Timeout waiting for PCFG_INIT");
|
||||
dev_err(&mgr->dev, "Timeout waiting for PCFG_INIT\n");
|
||||
goto out_err;
|
||||
}
|
||||
}
|
||||
|
@ -262,7 +262,7 @@ static int zynq_fpga_ops_write_init(struct fpga_manager *mgr, u32 flags,
|
|||
/* check that we have room in the command queue */
|
||||
status = zynq_fpga_read(priv, STATUS_OFFSET);
|
||||
if (status & STATUS_DMA_Q_F) {
|
||||
dev_err(priv->dev, "DMA command queue full");
|
||||
dev_err(&mgr->dev, "DMA command queue full\n");
|
||||
err = -EBUSY;
|
||||
goto out_err;
|
||||
}
|
||||
|
@ -295,7 +295,8 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr,
|
|||
in_count = count;
|
||||
priv = mgr->priv;
|
||||
|
||||
kbuf = dma_alloc_coherent(priv->dev, count, &dma_addr, GFP_KERNEL);
|
||||
kbuf =
|
||||
dma_alloc_coherent(mgr->dev.parent, count, &dma_addr, GFP_KERNEL);
|
||||
if (!kbuf)
|
||||
return -ENOMEM;
|
||||
|
||||
|
@ -331,19 +332,19 @@ static int zynq_fpga_ops_write(struct fpga_manager *mgr,
|
|||
zynq_fpga_write(priv, INT_STS_OFFSET, intr_status);
|
||||
|
||||
if (!((intr_status & IXR_D_P_DONE_MASK) == IXR_D_P_DONE_MASK)) {
|
||||
dev_err(priv->dev, "Error configuring FPGA");
|
||||
dev_err(&mgr->dev, "Error configuring FPGA\n");
|
||||
err = -EFAULT;
|
||||
}
|
||||
|
||||
clk_disable(priv->clk);
|
||||
|
||||
out_free:
|
||||
dma_free_coherent(priv->dev, in_count, kbuf, dma_addr);
|
||||
|
||||
dma_free_coherent(mgr->dev.parent, count, kbuf, dma_addr);
|
||||
return err;
|
||||
}
|
||||
|
||||
static int zynq_fpga_ops_write_complete(struct fpga_manager *mgr, u32 flags)
|
||||
static int zynq_fpga_ops_write_complete(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info)
|
||||
{
|
||||
struct zynq_fpga_priv *priv = mgr->priv;
|
||||
int err;
|
||||
|
@ -364,7 +365,7 @@ static int zynq_fpga_ops_write_complete(struct fpga_manager *mgr, u32 flags)
|
|||
return err;
|
||||
|
||||
/* for the partial reconfig case we didn't touch the level shifters */
|
||||
if (!(flags & FPGA_MGR_PARTIAL_RECONFIG)) {
|
||||
if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
|
||||
/* enable level shifters from PL to PS */
|
||||
regmap_write(priv->slcr, SLCR_LVL_SHFTR_EN_OFFSET,
|
||||
LVL_SHFTR_ENABLE_PL_TO_PS);
|
||||
|
@ -416,8 +417,6 @@ static int zynq_fpga_probe(struct platform_device *pdev)
|
|||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
priv->dev = dev;
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
priv->io_base = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(priv->io_base))
|
||||
|
@ -426,7 +425,7 @@ static int zynq_fpga_probe(struct platform_device *pdev)
|
|||
priv->slcr = syscon_regmap_lookup_by_phandle(dev->of_node,
|
||||
"syscon");
|
||||
if (IS_ERR(priv->slcr)) {
|
||||
dev_err(dev, "unable to get zynq-slcr regmap");
|
||||
dev_err(dev, "unable to get zynq-slcr regmap\n");
|
||||
return PTR_ERR(priv->slcr);
|
||||
}
|
||||
|
||||
|
@ -434,38 +433,41 @@ static int zynq_fpga_probe(struct platform_device *pdev)
|
|||
|
||||
priv->irq = platform_get_irq(pdev, 0);
|
||||
if (priv->irq < 0) {
|
||||
dev_err(dev, "No IRQ available");
|
||||
dev_err(dev, "No IRQ available\n");
|
||||
return priv->irq;
|
||||
}
|
||||
|
||||
err = devm_request_irq(dev, priv->irq, zynq_fpga_isr, 0,
|
||||
dev_name(dev), priv);
|
||||
if (err) {
|
||||
dev_err(dev, "unable to request IRQ");
|
||||
return err;
|
||||
}
|
||||
|
||||
priv->clk = devm_clk_get(dev, "ref_clk");
|
||||
if (IS_ERR(priv->clk)) {
|
||||
dev_err(dev, "input clock not found");
|
||||
dev_err(dev, "input clock not found\n");
|
||||
return PTR_ERR(priv->clk);
|
||||
}
|
||||
|
||||
err = clk_prepare_enable(priv->clk);
|
||||
if (err) {
|
||||
dev_err(dev, "unable to enable clock");
|
||||
dev_err(dev, "unable to enable clock\n");
|
||||
return err;
|
||||
}
|
||||
|
||||
/* unlock the device */
|
||||
zynq_fpga_write(priv, UNLOCK_OFFSET, UNLOCK_MASK);
|
||||
|
||||
zynq_fpga_write(priv, INT_MASK_OFFSET, 0xFFFFFFFF);
|
||||
zynq_fpga_write(priv, INT_STS_OFFSET, IXR_ALL_MASK);
|
||||
err = devm_request_irq(dev, priv->irq, zynq_fpga_isr, 0, dev_name(dev),
|
||||
priv);
|
||||
if (err) {
|
||||
dev_err(dev, "unable to request IRQ\n");
|
||||
clk_disable_unprepare(priv->clk);
|
||||
return err;
|
||||
}
|
||||
|
||||
clk_disable(priv->clk);
|
||||
|
||||
err = fpga_mgr_register(dev, "Xilinx Zynq FPGA Manager",
|
||||
&zynq_fpga_ops, priv);
|
||||
if (err) {
|
||||
dev_err(dev, "unable to register FPGA manager");
|
||||
dev_err(dev, "unable to register FPGA manager\n");
|
||||
clk_unprepare(priv->clk);
|
||||
return err;
|
||||
}
|
||||
|
|
|
@ -39,7 +39,7 @@
|
|||
* vmbus_setevent- Trigger an event notification on the specified
|
||||
* channel.
|
||||
*/
|
||||
static void vmbus_setevent(struct vmbus_channel *channel)
|
||||
void vmbus_setevent(struct vmbus_channel *channel)
|
||||
{
|
||||
struct hv_monitor_page *monitorpage;
|
||||
|
||||
|
@ -65,6 +65,7 @@ static void vmbus_setevent(struct vmbus_channel *channel)
|
|||
vmbus_set_event(channel);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(vmbus_setevent);
|
||||
|
||||
/*
|
||||
* vmbus_open - Open the specified channel.
|
||||
|
@ -635,8 +636,6 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
|
|||
u32 packetlen_aligned = ALIGN(packetlen, sizeof(u64));
|
||||
struct kvec bufferlist[3];
|
||||
u64 aligned_data = 0;
|
||||
int ret;
|
||||
bool signal = false;
|
||||
bool lock = channel->acquire_ring_lock;
|
||||
int num_vecs = ((bufferlen != 0) ? 3 : 1);
|
||||
|
||||
|
@ -656,33 +655,9 @@ int vmbus_sendpacket_ctl(struct vmbus_channel *channel, void *buffer,
|
|||
bufferlist[2].iov_base = &aligned_data;
|
||||
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
|
||||
|
||||
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, num_vecs,
|
||||
&signal, lock, channel->signal_policy);
|
||||
return hv_ringbuffer_write(channel, bufferlist, num_vecs,
|
||||
lock, kick_q);
|
||||
|
||||
/*
|
||||
* Signalling the host is conditional on many factors:
|
||||
* 1. The ring state changed from being empty to non-empty.
|
||||
* This is tracked by the variable "signal".
|
||||
* 2. The variable kick_q tracks if more data will be placed
|
||||
* on the ring. We will not signal if more data is
|
||||
* to be placed.
|
||||
*
|
||||
* Based on the channel signal state, we will decide
|
||||
* which signaling policy will be applied.
|
||||
*
|
||||
* If we cannot write to the ring-buffer; signal the host
|
||||
* even if we may not have written anything. This is a rare
|
||||
* enough condition that it should not matter.
|
||||
* NOTE: in this case, the hvsock channel is an exception, because
|
||||
* it looks the host side's hvsock implementation has a throttling
|
||||
* mechanism which can hurt the performance otherwise.
|
||||
*/
|
||||
|
||||
if (((ret == 0) && kick_q && signal) ||
|
||||
(ret && !is_hvsock_channel(channel)))
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(vmbus_sendpacket_ctl);
|
||||
|
||||
|
@ -723,7 +698,6 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
|
|||
u32 flags,
|
||||
bool kick_q)
|
||||
{
|
||||
int ret;
|
||||
int i;
|
||||
struct vmbus_channel_packet_page_buffer desc;
|
||||
u32 descsize;
|
||||
|
@ -731,7 +705,6 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
|
|||
u32 packetlen_aligned;
|
||||
struct kvec bufferlist[3];
|
||||
u64 aligned_data = 0;
|
||||
bool signal = false;
|
||||
bool lock = channel->acquire_ring_lock;
|
||||
|
||||
if (pagecount > MAX_PAGE_BUFFER_COUNT)
|
||||
|
@ -769,29 +742,8 @@ int vmbus_sendpacket_pagebuffer_ctl(struct vmbus_channel *channel,
|
|||
bufferlist[2].iov_base = &aligned_data;
|
||||
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
|
||||
|
||||
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
|
||||
&signal, lock, channel->signal_policy);
|
||||
|
||||
/*
|
||||
* Signalling the host is conditional on many factors:
|
||||
* 1. The ring state changed from being empty to non-empty.
|
||||
* This is tracked by the variable "signal".
|
||||
* 2. The variable kick_q tracks if more data will be placed
|
||||
* on the ring. We will not signal if more data is
|
||||
* to be placed.
|
||||
*
|
||||
* Based on the channel signal state, we will decide
|
||||
* which signaling policy will be applied.
|
||||
*
|
||||
* If we cannot write to the ring-buffer; signal the host
|
||||
* even if we may not have written anything. This is a rare
|
||||
* enough condition that it should not matter.
|
||||
*/
|
||||
|
||||
if (((ret == 0) && kick_q && signal) || (ret))
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return ret;
|
||||
return hv_ringbuffer_write(channel, bufferlist, 3,
|
||||
lock, kick_q);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(vmbus_sendpacket_pagebuffer_ctl);
|
||||
|
||||
|
@ -822,12 +774,10 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
|
|||
u32 desc_size,
|
||||
void *buffer, u32 bufferlen, u64 requestid)
|
||||
{
|
||||
int ret;
|
||||
u32 packetlen;
|
||||
u32 packetlen_aligned;
|
||||
struct kvec bufferlist[3];
|
||||
u64 aligned_data = 0;
|
||||
bool signal = false;
|
||||
bool lock = channel->acquire_ring_lock;
|
||||
|
||||
packetlen = desc_size + bufferlen;
|
||||
|
@ -848,13 +798,8 @@ int vmbus_sendpacket_mpb_desc(struct vmbus_channel *channel,
|
|||
bufferlist[2].iov_base = &aligned_data;
|
||||
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
|
||||
|
||||
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
|
||||
&signal, lock, channel->signal_policy);
|
||||
|
||||
if (ret == 0 && signal)
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return ret;
|
||||
return hv_ringbuffer_write(channel, bufferlist, 3,
|
||||
lock, true);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(vmbus_sendpacket_mpb_desc);
|
||||
|
||||
|
@ -866,14 +811,12 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
|
|||
struct hv_multipage_buffer *multi_pagebuffer,
|
||||
void *buffer, u32 bufferlen, u64 requestid)
|
||||
{
|
||||
int ret;
|
||||
struct vmbus_channel_packet_multipage_buffer desc;
|
||||
u32 descsize;
|
||||
u32 packetlen;
|
||||
u32 packetlen_aligned;
|
||||
struct kvec bufferlist[3];
|
||||
u64 aligned_data = 0;
|
||||
bool signal = false;
|
||||
bool lock = channel->acquire_ring_lock;
|
||||
u32 pfncount = NUM_PAGES_SPANNED(multi_pagebuffer->offset,
|
||||
multi_pagebuffer->len);
|
||||
|
@ -913,13 +856,8 @@ int vmbus_sendpacket_multipagebuffer(struct vmbus_channel *channel,
|
|||
bufferlist[2].iov_base = &aligned_data;
|
||||
bufferlist[2].iov_len = (packetlen_aligned - packetlen);
|
||||
|
||||
ret = hv_ringbuffer_write(&channel->outbound, bufferlist, 3,
|
||||
&signal, lock, channel->signal_policy);
|
||||
|
||||
if (ret == 0 && signal)
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return ret;
|
||||
return hv_ringbuffer_write(channel, bufferlist, 3,
|
||||
lock, true);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(vmbus_sendpacket_multipagebuffer);
|
||||
|
||||
|
@ -941,16 +879,9 @@ __vmbus_recvpacket(struct vmbus_channel *channel, void *buffer,
|
|||
u32 bufferlen, u32 *buffer_actual_len, u64 *requestid,
|
||||
bool raw)
|
||||
{
|
||||
int ret;
|
||||
bool signal = false;
|
||||
return hv_ringbuffer_read(channel, buffer, bufferlen,
|
||||
buffer_actual_len, requestid, raw);
|
||||
|
||||
ret = hv_ringbuffer_read(&channel->inbound, buffer, bufferlen,
|
||||
buffer_actual_len, requestid, &signal, raw);
|
||||
|
||||
if (signal)
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
int vmbus_recvpacket(struct vmbus_channel *channel, void *buffer,
|
||||
|
|
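The thread running through these channel.c hunks is that the signalling decision (including the hvsock special case described in the removed comment) moves out of the callers and into the ring-buffer layer, which now takes the channel itself. The prototypes implied by the new call sites look roughly as follows; this is a sketch reconstructed from the calls above, the real declarations live in hyperv_vmbus.h (not part of this excerpt) and parameter names may differ:

int hv_ringbuffer_write(struct vmbus_channel *channel,
			struct kvec *kv_list, u32 kv_count,
			bool lock, bool kick_q);

int hv_ringbuffer_read(struct vmbus_channel *channel,
		       void *buffer, u32 buflen, u32 *buffer_actual_len,
		       u64 *requestid, bool raw);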
|
@ -134,7 +134,7 @@ static const struct vmbus_device vmbus_devs[] = {
|
|||
},
|
||||
|
||||
/* Unknown GUID */
|
||||
{ .dev_type = HV_UNKOWN,
|
||||
{ .dev_type = HV_UNKNOWN,
|
||||
.perf_device = false,
|
||||
},
|
||||
};
|
||||
|
@ -163,9 +163,9 @@ static u16 hv_get_dev_type(const struct vmbus_channel *channel)
|
|||
u16 i;
|
||||
|
||||
if (is_hvsock_channel(channel) || is_unsupported_vmbus_devs(guid))
|
||||
return HV_UNKOWN;
|
||||
return HV_UNKNOWN;
|
||||
|
||||
for (i = HV_IDE; i < HV_UNKOWN; i++) {
|
||||
for (i = HV_IDE; i < HV_UNKNOWN; i++) {
|
||||
if (!uuid_le_cmp(*guid, vmbus_devs[i].guid))
|
||||
return i;
|
||||
}
|
||||
|
@ -389,6 +389,7 @@ void vmbus_free_channels(void)
|
|||
{
|
||||
struct vmbus_channel *channel, *tmp;
|
||||
|
||||
mutex_lock(&vmbus_connection.channel_mutex);
|
||||
list_for_each_entry_safe(channel, tmp, &vmbus_connection.chn_list,
|
||||
listentry) {
|
||||
/* hv_process_channel_removal() needs this */
|
||||
|
@ -396,6 +397,7 @@ void vmbus_free_channels(void)
|
|||
|
||||
vmbus_device_unregister(channel->device_obj);
|
||||
}
|
||||
mutex_unlock(&vmbus_connection.channel_mutex);
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -447,8 +449,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
|
|||
}
|
||||
|
||||
dev_type = hv_get_dev_type(newchannel);
|
||||
if (dev_type == HV_NIC)
|
||||
set_channel_signal_state(newchannel, HV_SIGNAL_POLICY_EXPLICIT);
|
||||
|
||||
init_vp_index(newchannel, dev_type);
|
||||
|
||||
|
|
|
@@ -39,6 +39,7 @@ struct vmbus_connection vmbus_connection = {
	.conn_state = DISCONNECTED,
	.next_gpadl_handle = ATOMIC_INIT(0xE1E10),
};
EXPORT_SYMBOL_GPL(vmbus_connection);

/*
 * Negotiated protocol version with the host.

@@ -575,7 +575,7 @@ void hv_synic_clockevents_cleanup(void)
	if (!(ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE))
		return;

	for_each_online_cpu(cpu)
	for_each_present_cpu(cpu)
		clockevents_unbind_device(hv_context.clk_evt[cpu], cpu);
}

@@ -594,8 +594,10 @@ void hv_synic_cleanup(void *arg)
		return;

	/* Turn off clockevent device */
	if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE)
	if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE) {
		clockevents_unbind_device(hv_context.clk_evt[cpu], cpu);
		hv_ce_shutdown(hv_context.clk_evt[cpu]);
	}

	rdmsrl(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, shared_sint.as_uint64);

@ -564,6 +564,11 @@ struct hv_dynmem_device {
|
|||
* next version to try.
|
||||
*/
|
||||
__u32 next_version;
|
||||
|
||||
/*
|
||||
* The negotiated version agreed by host.
|
||||
*/
|
||||
__u32 version;
|
||||
};
|
||||
|
||||
static struct hv_dynmem_device dm_device;
|
||||
|
@ -645,6 +650,7 @@ static void hv_bring_pgs_online(struct hv_hotadd_state *has,
|
|||
{
|
||||
int i;
|
||||
|
||||
pr_debug("Online %lu pages starting at pfn 0x%lx\n", size, start_pfn);
|
||||
for (i = 0; i < size; i++)
|
||||
hv_page_online_one(has, pfn_to_page(start_pfn + i));
|
||||
}
|
||||
|
@ -685,7 +691,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
|
|||
(HA_CHUNK << PAGE_SHIFT));
|
||||
|
||||
if (ret) {
|
||||
pr_info("hot_add memory failed error is %d\n", ret);
|
||||
pr_warn("hot_add memory failed error is %d\n", ret);
|
||||
if (ret == -EEXIST) {
|
||||
/*
|
||||
* This error indicates that the error
|
||||
|
@ -814,6 +820,9 @@ static unsigned long handle_pg_range(unsigned long pg_start,
|
|||
unsigned long old_covered_state;
|
||||
unsigned long res = 0, flags;
|
||||
|
||||
pr_debug("Hot adding %lu pages starting at pfn 0x%lx.\n", pg_count,
|
||||
pg_start);
|
||||
|
||||
spin_lock_irqsave(&dm_device.ha_lock, flags);
|
||||
list_for_each_entry(has, &dm_device.ha_region_list, list) {
|
||||
/*
|
||||
|
@ -1025,8 +1034,13 @@ static void process_info(struct hv_dynmem_device *dm, struct dm_info_msg *msg)
|
|||
|
||||
switch (info_hdr->type) {
|
||||
case INFO_TYPE_MAX_PAGE_CNT:
|
||||
pr_info("Received INFO_TYPE_MAX_PAGE_CNT\n");
|
||||
pr_info("Data Size is %d\n", info_hdr->data_size);
|
||||
if (info_hdr->data_size == sizeof(__u64)) {
|
||||
__u64 *max_page_count = (__u64 *)&info_hdr[1];
|
||||
|
||||
pr_info("INFO_TYPE_MAX_PAGE_CNT = %llu\n",
|
||||
*max_page_count);
|
||||
}
|
||||
|
||||
break;
|
||||
default:
|
||||
pr_info("Received Unknown type: %d\n", info_hdr->type);
|
||||
|
@ -1196,8 +1210,6 @@ static unsigned int alloc_balloon_pages(struct hv_dynmem_device *dm,
|
|||
return num_pages;
|
||||
}
|
||||
|
||||
|
||||
|
||||
static void balloon_up(struct work_struct *dummy)
|
||||
{
|
||||
unsigned int num_pages = dm_device.balloon_wrk.num_pages;
|
||||
|
@ -1224,6 +1236,10 @@ static void balloon_up(struct work_struct *dummy)
|
|||
|
||||
/* Refuse to balloon below the floor, keep the 2M granularity. */
|
||||
if (avail_pages < num_pages || avail_pages - num_pages < floor) {
|
||||
pr_warn("Balloon request will be partially fulfilled. %s\n",
|
||||
avail_pages < num_pages ? "Not enough memory." :
|
||||
"Balloon floor reached.");
|
||||
|
||||
num_pages = avail_pages > floor ? (avail_pages - floor) : 0;
|
||||
num_pages -= num_pages % PAGES_IN_2M;
|
||||
}
|
||||
|
@ -1245,6 +1261,9 @@ static void balloon_up(struct work_struct *dummy)
|
|||
}
|
||||
|
||||
if (num_ballooned == 0 || num_ballooned == num_pages) {
|
||||
pr_debug("Ballooned %u out of %u requested pages.\n",
|
||||
num_pages, dm_device.balloon_wrk.num_pages);
|
||||
|
||||
bl_resp->more_pages = 0;
|
||||
done = true;
|
||||
dm_device.state = DM_INITIALIZED;
|
||||
|
@ -1292,12 +1311,16 @@ static void balloon_down(struct hv_dynmem_device *dm,
|
|||
int range_count = req->range_count;
|
||||
struct dm_unballoon_response resp;
|
||||
int i;
|
||||
unsigned int prev_pages_ballooned = dm->num_pages_ballooned;
|
||||
|
||||
for (i = 0; i < range_count; i++) {
|
||||
free_balloon_pages(dm, &range_array[i]);
|
||||
complete(&dm_device.config_event);
|
||||
}
|
||||
|
||||
pr_debug("Freed %u ballooned pages.\n",
|
||||
prev_pages_ballooned - dm->num_pages_ballooned);
|
||||
|
||||
if (req->more_pages == 1)
|
||||
return;
|
||||
|
||||
|
@ -1365,6 +1388,7 @@ static void version_resp(struct hv_dynmem_device *dm,
|
|||
version_req.hdr.size = sizeof(struct dm_version_request);
|
||||
version_req.hdr.trans_id = atomic_inc_return(&trans_id);
|
||||
version_req.version.version = dm->next_version;
|
||||
dm->version = version_req.version.version;
|
||||
|
||||
/*
|
||||
* Set the next version to try in case current version fails.
|
||||
|
@ -1501,7 +1525,11 @@ static int balloon_probe(struct hv_device *dev,
|
|||
struct dm_version_request version_req;
|
||||
struct dm_capabilities cap_msg;
|
||||
|
||||
#ifdef CONFIG_MEMORY_HOTPLUG
|
||||
do_hot_add = hot_add;
|
||||
#else
|
||||
do_hot_add = false;
|
||||
#endif
|
||||
|
||||
/*
|
||||
* First allocate a send buffer.
|
||||
|
@ -1553,6 +1581,7 @@ static int balloon_probe(struct hv_device *dev,
|
|||
version_req.hdr.trans_id = atomic_inc_return(&trans_id);
|
||||
version_req.version.version = DYNMEM_PROTOCOL_VERSION_WIN10;
|
||||
version_req.is_last_attempt = 0;
|
||||
dm_device.version = version_req.version.version;
|
||||
|
||||
ret = vmbus_sendpacket(dev->channel, &version_req,
|
||||
sizeof(struct dm_version_request),
|
||||
|
@ -1575,6 +1604,11 @@ static int balloon_probe(struct hv_device *dev,
|
|||
ret = -ETIMEDOUT;
|
||||
goto probe_error2;
|
||||
}
|
||||
|
||||
pr_info("Using Dynamic Memory protocol version %u.%u\n",
|
||||
DYNMEM_MAJOR_VERSION(dm_device.version),
|
||||
DYNMEM_MINOR_VERSION(dm_device.version));
|
||||
|
||||
/*
|
||||
* Now submit our capabilities to the host.
|
||||
*/
|
||||
|
|
|
@ -31,7 +31,10 @@
|
|||
#define VSS_MINOR 0
|
||||
#define VSS_VERSION (VSS_MAJOR << 16 | VSS_MINOR)
|
||||
|
||||
#define VSS_USERSPACE_TIMEOUT (msecs_to_jiffies(10 * 1000))
|
||||
/*
|
||||
* Timeout values are based on expecations from host
|
||||
*/
|
||||
#define VSS_FREEZE_TIMEOUT (15 * 60)
|
||||
|
||||
/*
|
||||
* Global state maintained for transaction that is being processed. For a class
|
||||
|
@ -120,7 +123,7 @@ static int vss_handle_handshake(struct hv_vss_msg *vss_msg)
|
|||
default:
|
||||
return -EINVAL;
|
||||
}
|
||||
pr_debug("VSS: userspace daemon ver. %d connected\n", dm_reg_value);
|
||||
pr_info("VSS: userspace daemon ver. %d connected\n", dm_reg_value);
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
@ -128,8 +131,10 @@ static int vss_on_msg(void *msg, int len)
|
|||
{
|
||||
struct hv_vss_msg *vss_msg = (struct hv_vss_msg *)msg;
|
||||
|
||||
if (len != sizeof(*vss_msg))
|
||||
if (len != sizeof(*vss_msg)) {
|
||||
pr_debug("VSS: Message size does not match length\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (vss_msg->vss_hdr.operation == VSS_OP_REGISTER ||
|
||||
vss_msg->vss_hdr.operation == VSS_OP_REGISTER1) {
|
||||
|
@ -137,8 +142,11 @@ static int vss_on_msg(void *msg, int len)
|
|||
* Don't process registration messages if we're in the middle
|
||||
* of a transaction processing.
|
||||
*/
|
||||
if (vss_transaction.state > HVUTIL_READY)
|
||||
if (vss_transaction.state > HVUTIL_READY) {
|
||||
pr_debug("VSS: Got unexpected registration request\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
return vss_handle_handshake(vss_msg);
|
||||
} else if (vss_transaction.state == HVUTIL_USERSPACE_REQ) {
|
||||
vss_transaction.state = HVUTIL_USERSPACE_RECV;
|
||||
|
@ -155,7 +163,7 @@ static int vss_on_msg(void *msg, int len)
|
|||
}
|
||||
} else {
|
||||
/* This is a spurious call! */
|
||||
pr_warn("VSS: Transaction not active\n");
|
||||
pr_debug("VSS: Transaction not active\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
return 0;
|
||||
|
@ -168,8 +176,10 @@ static void vss_send_op(void)
|
|||
struct hv_vss_msg *vss_msg;
|
||||
|
||||
/* The transaction state is wrong. */
|
||||
if (vss_transaction.state != HVUTIL_HOSTMSG_RECEIVED)
|
||||
if (vss_transaction.state != HVUTIL_HOSTMSG_RECEIVED) {
|
||||
pr_debug("VSS: Unexpected attempt to send to daemon\n");
|
||||
return;
|
||||
}
|
||||
|
||||
vss_msg = kzalloc(sizeof(*vss_msg), GFP_KERNEL);
|
||||
if (!vss_msg)
|
||||
|
@ -179,7 +189,8 @@ static void vss_send_op(void)
|
|||
|
||||
vss_transaction.state = HVUTIL_USERSPACE_REQ;
|
||||
|
||||
schedule_delayed_work(&vss_timeout_work, VSS_USERSPACE_TIMEOUT);
|
||||
schedule_delayed_work(&vss_timeout_work, op == VSS_OP_FREEZE ?
|
||||
VSS_FREEZE_TIMEOUT * HZ : HV_UTIL_TIMEOUT * HZ);
|
||||
|
||||
rc = hvutil_transport_send(hvt, vss_msg, sizeof(*vss_msg), NULL);
|
||||
if (rc) {
|
||||
|
@ -210,9 +221,13 @@ static void vss_handle_request(struct work_struct *dummy)
|
|||
case VSS_OP_HOT_BACKUP:
|
||||
if (vss_transaction.state < HVUTIL_READY) {
|
||||
/* Userspace is not registered yet */
|
||||
pr_debug("VSS: Not ready for request.\n");
|
||||
vss_respond_to_host(HV_E_FAIL);
|
||||
return;
|
||||
}
|
||||
|
||||
pr_debug("VSS: Received request for op code: %d\n",
|
||||
vss_transaction.msg->vss_hdr.operation);
|
||||
vss_transaction.state = HVUTIL_HOSTMSG_RECEIVED;
|
||||
vss_send_op();
|
||||
return;
|
||||
|
@ -353,8 +368,10 @@ hv_vss_init(struct hv_util_service *srv)
|
|||
|
||||
hvt = hvutil_transport_init(vss_devname, CN_VSS_IDX, CN_VSS_VAL,
|
||||
vss_on_msg, vss_on_reset);
|
||||
if (!hvt)
|
||||
if (!hvt) {
|
||||
pr_warn("VSS: Failed to initialize transport\n");
|
||||
return -EFAULT;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@@ -389,17 +389,20 @@ static int util_probe(struct hv_device *dev,
		ts_srv_version = TS_VERSION_1;
		hb_srv_version = HB_VERSION_1;
		break;
	case(VERSION_WIN10):
		util_fw_version = UTIL_FW_VERSION;
		sd_srv_version = SD_VERSION;
		ts_srv_version = TS_VERSION;
		hb_srv_version = HB_VERSION;
		break;
	default:
	case VERSION_WIN7:
	case VERSION_WIN8:
	case VERSION_WIN8_1:
		util_fw_version = UTIL_FW_VERSION;
		sd_srv_version = SD_VERSION;
		ts_srv_version = TS_VERSION_3;
		hb_srv_version = HB_VERSION;
		break;
	case VERSION_WIN10:
	default:
		util_fw_version = UTIL_FW_VERSION;
		sd_srv_version = SD_VERSION;
		ts_srv_version = TS_VERSION;
		hb_srv_version = HB_VERSION;
	}

	ret = vmbus_open(dev->channel, 4 * PAGE_SIZE, 4 * PAGE_SIZE, NULL, 0,

@@ -38,7 +38,7 @@
/*
 * Timeout for guest-host handshake for services.
 */
#define HV_UTIL_NEGO_TIMEOUT 60
#define HV_UTIL_NEGO_TIMEOUT 55

/*
 * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent

@@ -527,14 +527,14 @@ int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,

void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info);

int hv_ringbuffer_write(struct hv_ring_buffer_info *ring_info,
int hv_ringbuffer_write(struct vmbus_channel *channel,
			struct kvec *kv_list,
			u32 kv_count, bool *signal, bool lock,
			enum hv_signal_policy policy);
			u32 kv_count, bool lock,
			bool kick_q);

int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
int hv_ringbuffer_read(struct vmbus_channel *channel,
		       void *buffer, u32 buflen, u32 *buffer_actual_len,
		       u64 *requestid, bool *signal, bool raw);
		       u64 *requestid, bool raw);

void hv_ringbuffer_get_debuginfo(struct hv_ring_buffer_info *ring_info,
				 struct hv_ring_buffer_debug_info *debug_info);

@ -66,21 +66,25 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
|
|||
* once the ring buffer is empty, it will clear the
|
||||
* interrupt_mask and re-check to see if new data has
|
||||
* arrived.
|
||||
*
|
||||
* KYS: Oct. 30, 2016:
|
||||
* It looks like Windows hosts have logic to deal with DOS attacks that
|
||||
* can be triggered if it receives interrupts when it is not expecting
|
||||
* the interrupt. The host expects interrupts only when the ring
|
||||
* transitions from empty to non-empty (or full to non full on the guest
|
||||
* to host ring).
|
||||
* So, base the signaling decision solely on the ring state until the
|
||||
* host logic is fixed.
|
||||
*/
|
||||
|
||||
static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi,
|
||||
enum hv_signal_policy policy)
|
||||
static void hv_signal_on_write(u32 old_write, struct vmbus_channel *channel,
|
||||
bool kick_q)
|
||||
{
|
||||
struct hv_ring_buffer_info *rbi = &channel->outbound;
|
||||
|
||||
virt_mb();
|
||||
if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
|
||||
return false;
|
||||
|
||||
/*
|
||||
* When the client wants to control signaling,
|
||||
* we only honour the host interrupt mask.
|
||||
*/
|
||||
if (policy == HV_SIGNAL_POLICY_EXPLICIT)
|
||||
return true;
|
||||
return;
|
||||
|
||||
/* check interrupt_mask before read_index */
|
||||
virt_rmb();
|
||||
|
@ -89,9 +93,9 @@ static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi,
|
|||
* ring transitions from being empty to non-empty.
|
||||
*/
|
||||
if (old_write == READ_ONCE(rbi->ring_buffer->read_index))
|
||||
return true;
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return false;
|
||||
return;
|
||||
}
|
||||
|
||||
/* Get the next write location for the specified ring buffer. */
|
||||
|
@ -280,9 +284,9 @@ void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info)
|
|||
}
|
||||
|
||||
/* Write to the ring buffer. */
|
||||
int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
|
||||
struct kvec *kv_list, u32 kv_count, bool *signal, bool lock,
|
||||
enum hv_signal_policy policy)
|
||||
int hv_ringbuffer_write(struct vmbus_channel *channel,
|
||||
struct kvec *kv_list, u32 kv_count, bool lock,
|
||||
bool kick_q)
|
||||
{
|
||||
int i = 0;
|
||||
u32 bytes_avail_towrite;
|
||||
|
@ -292,6 +296,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
|
|||
u32 old_write;
|
||||
u64 prev_indices = 0;
|
||||
unsigned long flags = 0;
|
||||
struct hv_ring_buffer_info *outring_info = &channel->outbound;
|
||||
|
||||
for (i = 0; i < kv_count; i++)
|
||||
totalbytes_towrite += kv_list[i].iov_len;
|
||||
|
@ -344,13 +349,13 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
|
|||
if (lock)
|
||||
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
|
||||
|
||||
*signal = hv_need_to_signal(old_write, outring_info, policy);
|
||||
hv_signal_on_write(old_write, channel, kick_q);
|
||||
return 0;
|
||||
}
|
||||
|
||||
int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
|
||||
int hv_ringbuffer_read(struct vmbus_channel *channel,
|
||||
void *buffer, u32 buflen, u32 *buffer_actual_len,
|
||||
u64 *requestid, bool *signal, bool raw)
|
||||
u64 *requestid, bool raw)
|
||||
{
|
||||
u32 bytes_avail_toread;
|
||||
u32 next_read_location = 0;
|
||||
|
@ -359,6 +364,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
|
|||
u32 offset;
|
||||
u32 packetlen;
|
||||
int ret = 0;
|
||||
struct hv_ring_buffer_info *inring_info = &channel->inbound;
|
||||
|
||||
if (buflen <= 0)
|
||||
return -EINVAL;
|
||||
|
@ -416,7 +422,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
|
|||
/* Update the read index */
|
||||
hv_set_next_read_location(inring_info, next_read_location);
|
||||
|
||||
*signal = hv_need_to_signal_on_read(inring_info);
|
||||
hv_signal_on_read(channel);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
|
|
@ -45,6 +45,11 @@
|
|||
#include <linux/random.h>
|
||||
#include "hyperv_vmbus.h"
|
||||
|
||||
struct vmbus_dynid {
|
||||
struct list_head node;
|
||||
struct hv_vmbus_device_id id;
|
||||
};
|
||||
|
||||
static struct acpi_device *hv_acpi_dev;
|
||||
|
||||
static struct completion probe_event;
|
||||
|
@ -500,7 +505,7 @@ static ssize_t device_show(struct device *dev,
|
|||
static DEVICE_ATTR_RO(device);
|
||||
|
||||
/* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */
|
||||
static struct attribute *vmbus_attrs[] = {
|
||||
static struct attribute *vmbus_dev_attrs[] = {
|
||||
&dev_attr_id.attr,
|
||||
&dev_attr_state.attr,
|
||||
&dev_attr_monitor_id.attr,
|
||||
|
@ -528,7 +533,7 @@ static struct attribute *vmbus_attrs[] = {
|
|||
&dev_attr_device.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(vmbus);
|
||||
ATTRIBUTE_GROUPS(vmbus_dev);
|
||||
|
||||
/*
|
||||
* vmbus_uevent - add uevent for our device
|
||||
|
@ -565,10 +570,29 @@ static inline bool is_null_guid(const uuid_le *guid)
|
|||
* Return a matching hv_vmbus_device_id pointer.
|
||||
* If there is no match, return NULL.
|
||||
*/
|
||||
static const struct hv_vmbus_device_id *hv_vmbus_get_id(
|
||||
const struct hv_vmbus_device_id *id,
|
||||
static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv,
|
||||
const uuid_le *guid)
|
||||
{
|
||||
const struct hv_vmbus_device_id *id = NULL;
|
||||
struct vmbus_dynid *dynid;
|
||||
|
||||
/* Look at the dynamic ids first, before the static ones */
|
||||
spin_lock(&drv->dynids.lock);
|
||||
list_for_each_entry(dynid, &drv->dynids.list, node) {
|
||||
if (!uuid_le_cmp(dynid->id.guid, *guid)) {
|
||||
id = &dynid->id;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&drv->dynids.lock);
|
||||
|
||||
if (id)
|
||||
return id;
|
||||
|
||||
id = drv->id_table;
|
||||
if (id == NULL)
|
||||
return NULL; /* empty device table */
|
||||
|
||||
for (; !is_null_guid(&id->guid); id++)
|
||||
if (!uuid_le_cmp(id->guid, *guid))
|
||||
return id;
|
||||
|
@ -576,6 +600,134 @@ static const struct hv_vmbus_device_id *hv_vmbus_get_id(
|
|||
return NULL;
|
||||
}
|
||||
|
||||
/* vmbus_add_dynid - add a new device ID to this driver and re-probe devices */
|
||||
static int vmbus_add_dynid(struct hv_driver *drv, uuid_le *guid)
|
||||
{
|
||||
struct vmbus_dynid *dynid;
|
||||
|
||||
dynid = kzalloc(sizeof(*dynid), GFP_KERNEL);
|
||||
if (!dynid)
|
||||
return -ENOMEM;
|
||||
|
||||
dynid->id.guid = *guid;
|
||||
|
||||
spin_lock(&drv->dynids.lock);
|
||||
list_add_tail(&dynid->node, &drv->dynids.list);
|
||||
spin_unlock(&drv->dynids.lock);
|
||||
|
||||
return driver_attach(&drv->driver);
|
||||
}
|
||||
|
||||
static void vmbus_free_dynids(struct hv_driver *drv)
|
||||
{
|
||||
struct vmbus_dynid *dynid, *n;
|
||||
|
||||
spin_lock(&drv->dynids.lock);
|
||||
list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
|
||||
list_del(&dynid->node);
|
||||
kfree(dynid);
|
||||
}
|
||||
spin_unlock(&drv->dynids.lock);
|
||||
}
|
||||
|
||||
/* Parse string of form: 1b4e28ba-2fa1-11d2-883f-b9a761bde3f */
|
||||
static int get_uuid_le(const char *str, uuid_le *uu)
|
||||
{
|
||||
unsigned int b[16];
|
||||
int i;
|
||||
|
||||
if (strlen(str) < 37)
|
||||
return -1;
|
||||
|
||||
for (i = 0; i < 36; i++) {
|
||||
switch (i) {
|
||||
case 8: case 13: case 18: case 23:
|
||||
if (str[i] != '-')
|
||||
return -1;
|
||||
break;
|
||||
default:
|
||||
if (!isxdigit(str[i]))
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
/* unparse little endian output byte order */
|
||||
if (sscanf(str,
|
||||
"%2x%2x%2x%2x-%2x%2x-%2x%2x-%2x%2x-%2x%2x%2x%2x%2x%2x",
|
||||
&b[3], &b[2], &b[1], &b[0],
|
||||
&b[5], &b[4], &b[7], &b[6], &b[8], &b[9],
|
||||
&b[10], &b[11], &b[12], &b[13], &b[14], &b[15]) != 16)
|
||||
return -1;
|
||||
|
||||
for (i = 0; i < 16; i++)
|
||||
uu->b[i] = b[i];
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* store_new_id - sysfs frontend to vmbus_add_dynid()
|
||||
*
|
||||
* Allow GUIDs to be added to an existing driver via sysfs.
|
||||
*/
|
||||
static ssize_t new_id_store(struct device_driver *driver, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
struct hv_driver *drv = drv_to_hv_drv(driver);
|
||||
uuid_le guid = NULL_UUID_LE;
|
||||
ssize_t retval;
|
||||
|
||||
if (get_uuid_le(buf, &guid) != 0)
|
||||
return -EINVAL;
|
||||
|
||||
if (hv_vmbus_get_id(drv, &guid))
|
||||
return -EEXIST;
|
||||
|
||||
retval = vmbus_add_dynid(drv, &guid);
|
||||
if (retval)
|
||||
return retval;
|
||||
return count;
|
||||
}
|
||||
static DRIVER_ATTR_WO(new_id);
|
||||
|
||||
/*
|
||||
* store_remove_id - remove a PCI device ID from this driver
|
||||
*
|
||||
* Removes a dynamic pci device ID to this driver.
|
||||
*/
|
||||
static ssize_t remove_id_store(struct device_driver *driver, const char *buf,
|
||||
size_t count)
|
||||
{
|
||||
struct hv_driver *drv = drv_to_hv_drv(driver);
|
||||
struct vmbus_dynid *dynid, *n;
|
||||
uuid_le guid = NULL_UUID_LE;
|
||||
size_t retval = -ENODEV;
|
||||
|
||||
if (get_uuid_le(buf, &guid))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock(&drv->dynids.lock);
|
||||
list_for_each_entry_safe(dynid, n, &drv->dynids.list, node) {
|
||||
struct hv_vmbus_device_id *id = &dynid->id;
|
||||
|
||||
if (!uuid_le_cmp(id->guid, guid)) {
|
||||
list_del(&dynid->node);
|
||||
kfree(dynid);
|
||||
retval = count;
|
||||
break;
|
||||
}
|
||||
}
|
||||
spin_unlock(&drv->dynids.lock);
|
||||
|
||||
return retval;
|
||||
}
|
||||
static DRIVER_ATTR_WO(remove_id);
|
||||
|
||||
static struct attribute *vmbus_drv_attrs[] = {
|
||||
&driver_attr_new_id.attr,
|
||||
&driver_attr_remove_id.attr,
|
||||
NULL,
|
||||
};
|
||||
ATTRIBUTE_GROUPS(vmbus_drv);
|
||||
|
||||
|
||||
/*
|
||||
|
@ -590,7 +742,7 @@ static int vmbus_match(struct device *device, struct device_driver *driver)
|
|||
if (is_hvsock_channel(hv_dev->channel))
|
||||
return drv->hvsock;
|
||||
|
||||
if (hv_vmbus_get_id(drv->id_table, &hv_dev->dev_type))
|
||||
if (hv_vmbus_get_id(drv, &hv_dev->dev_type))
|
||||
return 1;
|
||||
|
||||
return 0;
|
||||
|
@ -607,7 +759,7 @@ static int vmbus_probe(struct device *child_device)
|
|||
struct hv_device *dev = device_to_hv_device(child_device);
|
||||
const struct hv_vmbus_device_id *dev_id;
|
||||
|
||||
dev_id = hv_vmbus_get_id(drv->id_table, &dev->dev_type);
|
||||
dev_id = hv_vmbus_get_id(drv, &dev->dev_type);
|
||||
if (drv->probe) {
|
||||
ret = drv->probe(dev, dev_id);
|
||||
if (ret != 0)
|
||||
|
@ -684,7 +836,8 @@ static struct bus_type hv_bus = {
|
|||
.remove = vmbus_remove,
|
||||
.probe = vmbus_probe,
|
||||
.uevent = vmbus_uevent,
|
||||
.dev_groups = vmbus_groups,
|
||||
.dev_groups = vmbus_dev_groups,
|
||||
.drv_groups = vmbus_drv_groups,
|
||||
};
|
||||
|
||||
struct onmessage_work_context {
|
||||
|
@ -905,6 +1058,9 @@ int __vmbus_driver_register(struct hv_driver *hv_driver, struct module *owner, c
|
|||
hv_driver->driver.mod_name = mod_name;
|
||||
hv_driver->driver.bus = &hv_bus;
|
||||
|
||||
spin_lock_init(&hv_driver->dynids.lock);
|
||||
INIT_LIST_HEAD(&hv_driver->dynids.list);
|
||||
|
||||
ret = driver_register(&hv_driver->driver);
|
||||
|
||||
return ret;
|
||||
|
@ -923,8 +1079,10 @@ void vmbus_driver_unregister(struct hv_driver *hv_driver)
|
|||
{
|
||||
pr_info("unregistering driver %s\n", hv_driver->name);
|
||||
|
||||
if (!vmbus_exists())
|
||||
if (!vmbus_exists()) {
|
||||
driver_unregister(&hv_driver->driver);
|
||||
vmbus_free_dynids(hv_driver);
|
||||
}
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(vmbus_driver_unregister);
|
||||
|
||||
|
|
|
@ -202,6 +202,21 @@ static void *etm_setup_aux(int event_cpu, void **pages,
|
|||
if (!event_data)
|
||||
return NULL;
|
||||
|
||||
/*
|
||||
* In theory nothing prevent tracers in a trace session from being
|
||||
* associated with different sinks, nor having a sink per tracer. But
|
||||
* until we have HW with this kind of topology we need to assume tracers
|
||||
* in a trace session are using the same sink. Therefore go through
|
||||
* the coresight bus and pick the first enabled sink.
|
||||
*
|
||||
* When operated from sysFS users are responsible to enable the sink
|
||||
* while from perf, the perf tools will do it based on the choice made
|
||||
* on the cmd line. As such the "enable_sink" flag in sysFS is reset.
|
||||
*/
|
||||
sink = coresight_get_enabled_sink(true);
|
||||
if (!sink)
|
||||
goto err;
|
||||
|
||||
INIT_WORK(&event_data->work, free_event_data);
|
||||
|
||||
mask = &event_data->mask;
|
||||
|
@ -219,25 +234,11 @@ static void *etm_setup_aux(int event_cpu, void **pages,
|
|||
* list of devices from source to sink that can be
|
||||
* referenced later when the path is actually needed.
|
||||
*/
|
||||
event_data->path[cpu] = coresight_build_path(csdev);
|
||||
event_data->path[cpu] = coresight_build_path(csdev, sink);
|
||||
if (IS_ERR(event_data->path[cpu]))
|
||||
goto err;
|
||||
}
|
||||
|
||||
/*
|
||||
* In theory nothing prevent tracers in a trace session from being
|
||||
* associated with different sinks, nor having a sink per tracer. But
|
||||
* until we have HW with this kind of topology and a way to convey
|
||||
* sink assignement from the perf cmd line we need to assume tracers
|
||||
* in a trace session are using the same sink. Therefore pick the sink
|
||||
* found at the end of the first available path.
|
||||
*/
|
||||
cpu = cpumask_first(mask);
|
||||
/* Grab the sink at the end of the path */
|
||||
sink = coresight_get_sink(event_data->path[cpu]);
|
||||
if (!sink)
|
||||
goto err;
|
||||
|
||||
if (!sink_ops(sink)->alloc_buffer)
|
||||
goto err;
|
||||
|
||||
|
|
|
@ -89,11 +89,13 @@
|
|||
/* ETMCR - 0x00 */
|
||||
#define ETMCR_PWD_DWN BIT(0)
|
||||
#define ETMCR_STALL_MODE BIT(7)
|
||||
#define ETMCR_BRANCH_BROADCAST BIT(8)
|
||||
#define ETMCR_ETM_PRG BIT(10)
|
||||
#define ETMCR_ETM_EN BIT(11)
|
||||
#define ETMCR_CYC_ACC BIT(12)
|
||||
#define ETMCR_CTXID_SIZE (BIT(14)|BIT(15))
|
||||
#define ETMCR_TIMESTAMP_EN BIT(28)
|
||||
#define ETMCR_RETURN_STACK BIT(29)
|
||||
/* ETMCCR - 0x04 */
|
||||
#define ETMCCR_FIFOFULL BIT(23)
|
||||
/* ETMPDCR - 0x310 */
|
||||
|
@ -110,8 +112,11 @@
|
|||
#define ETM_MODE_STALL BIT(2)
|
||||
#define ETM_MODE_TIMESTAMP BIT(3)
|
||||
#define ETM_MODE_CTXID BIT(4)
|
||||
#define ETM_MODE_BBROAD BIT(5)
|
||||
#define ETM_MODE_RET_STACK BIT(6)
|
||||
#define ETM_MODE_ALL (ETM_MODE_EXCLUDE | ETM_MODE_CYCACC | \
|
||||
ETM_MODE_STALL | ETM_MODE_TIMESTAMP | \
|
||||
ETM_MODE_BBROAD | ETM_MODE_RET_STACK | \
|
||||
ETM_MODE_CTXID | ETM_MODE_EXCL_KERN | \
|
||||
ETM_MODE_EXCL_USER)
|
||||
|
||||
|
|
|
@ -146,7 +146,7 @@ static ssize_t mode_store(struct device *dev,
|
|||
goto err_unlock;
|
||||
}
|
||||
config->ctrl |= ETMCR_STALL_MODE;
|
||||
} else
|
||||
} else
|
||||
config->ctrl &= ~ETMCR_STALL_MODE;
|
||||
|
||||
if (config->mode & ETM_MODE_TIMESTAMP) {
|
||||
|
@ -164,6 +164,16 @@ static ssize_t mode_store(struct device *dev,
|
|||
else
|
||||
config->ctrl &= ~ETMCR_CTXID_SIZE;
|
||||
|
||||
if (config->mode & ETM_MODE_BBROAD)
|
||||
config->ctrl |= ETMCR_BRANCH_BROADCAST;
|
||||
else
|
||||
config->ctrl &= ~ETMCR_BRANCH_BROADCAST;
|
||||
|
||||
if (config->mode & ETM_MODE_RET_STACK)
|
||||
config->ctrl |= ETMCR_RETURN_STACK;
|
||||
else
|
||||
config->ctrl &= ~ETMCR_RETURN_STACK;
|
||||
|
||||
if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
|
||||
etm_config_trace_mode(config);
|
||||
|
||||
|
|
|
@@ -111,7 +111,9 @@ static inline void CS_UNLOCK(void __iomem *addr)
void coresight_disable_path(struct list_head *path);
int coresight_enable_path(struct list_head *path, u32 mode);
struct coresight_device *coresight_get_sink(struct list_head *path);
struct list_head *coresight_build_path(struct coresight_device *csdev);
struct coresight_device *coresight_get_enabled_sink(bool reset);
struct list_head *coresight_build_path(struct coresight_device *csdev,
				       struct coresight_device *sink);
void coresight_release_path(struct list_head *path);

#ifdef CONFIG_CORESIGHT_SOURCE_ETM3X

@@ -419,10 +419,10 @@ static ssize_t stm_generic_packet(struct stm_data *stm_data,
				  struct stm_drvdata, stm);

	if (!(drvdata && local_read(&drvdata->mode)))
		return 0;
		return -EACCES;

	if (channel >= drvdata->numsp)
		return 0;
		return -EINVAL;

	ch_addr = (unsigned long)stm_channel_addr(drvdata, channel);

@@ -920,6 +920,11 @@ static struct amba_id stm_ids[] = {
		.mask	= 0x0003ffff,
		.data	= "STM32",
	},
	{
		.id	= 0x0003b963,
		.mask	= 0x0003ffff,
		.data	= "STM500",
	},
	{ 0, 0},
};

@ -70,7 +70,7 @@ static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
|
|||
* When operating in sysFS mode the content of the buffer needs to be
|
||||
* read before the TMC is disabled.
|
||||
*/
|
||||
if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
tmc_etb_dump_hw(drvdata);
|
||||
tmc_disable_hw(drvdata);
|
||||
|
||||
|
@ -103,19 +103,14 @@ static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
|
|||
CS_LOCK(drvdata->base);
|
||||
}
|
||||
|
||||
static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
||||
static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
|
||||
{
|
||||
int ret = 0;
|
||||
bool used = false;
|
||||
char *buf = NULL;
|
||||
long val;
|
||||
unsigned long flags;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
/* This shouldn't be happening */
|
||||
if (WARN_ON(mode != CS_MODE_SYSFS))
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* If we don't have a buffer release the lock and allocate memory.
|
||||
* Otherwise keep the lock and move along.
|
||||
|
@ -138,13 +133,12 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
|||
goto out;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, mode);
|
||||
/*
|
||||
* In sysFS mode we can have multiple writers per sink. Since this
|
||||
* sink is already enabled no memory is needed and the HW need not be
|
||||
* touched.
|
||||
*/
|
||||
if (val == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
|
@ -163,6 +157,7 @@ static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
|||
drvdata->buf = buf;
|
||||
}
|
||||
|
||||
drvdata->mode = CS_MODE_SYSFS;
|
||||
tmc_etb_enable_hw(drvdata);
|
||||
out:
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
@ -177,34 +172,29 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, u32 mode)
|
||||
static int tmc_enable_etf_sink_perf(struct coresight_device *csdev)
|
||||
{
|
||||
int ret = 0;
|
||||
long val;
|
||||
unsigned long flags;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
/* This shouldn't be happening */
|
||||
if (WARN_ON(mode != CS_MODE_PERF))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock_irqsave(&drvdata->spinlock, flags);
|
||||
if (drvdata->reading) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, mode);
|
||||
/*
|
||||
* In Perf mode there can be only one writer per sink. There
|
||||
* is also no need to continue if the ETB/ETR is already operated
|
||||
* from sysFS.
|
||||
*/
|
||||
if (val != CS_MODE_DISABLED) {
|
||||
if (drvdata->mode != CS_MODE_DISABLED) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
drvdata->mode = CS_MODE_PERF;
|
||||
tmc_etb_enable_hw(drvdata);
|
||||
out:
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
@ -216,9 +206,9 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
|
|||
{
|
||||
switch (mode) {
|
||||
case CS_MODE_SYSFS:
|
||||
return tmc_enable_etf_sink_sysfs(csdev, mode);
|
||||
return tmc_enable_etf_sink_sysfs(csdev);
|
||||
case CS_MODE_PERF:
|
||||
return tmc_enable_etf_sink_perf(csdev, mode);
|
||||
return tmc_enable_etf_sink_perf(csdev);
|
||||
}
|
||||
|
||||
/* We shouldn't be here */
|
||||
|
@ -227,7 +217,6 @@ static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
|
|||
|
||||
static void tmc_disable_etf_sink(struct coresight_device *csdev)
|
||||
{
|
||||
long val;
|
||||
unsigned long flags;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
|
@ -237,10 +226,11 @@ static void tmc_disable_etf_sink(struct coresight_device *csdev)
|
|||
return;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
|
||||
/* Disable the TMC only if it needs to */
|
||||
if (val != CS_MODE_DISABLED)
|
||||
if (drvdata->mode != CS_MODE_DISABLED) {
|
||||
tmc_etb_disable_hw(drvdata);
|
||||
drvdata->mode = CS_MODE_DISABLED;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
||||
|
@ -260,7 +250,7 @@ static int tmc_enable_etf_link(struct coresight_device *csdev,
|
|||
}
|
||||
|
||||
tmc_etf_enable_hw(drvdata);
|
||||
local_set(&drvdata->mode, CS_MODE_SYSFS);
|
||||
drvdata->mode = CS_MODE_SYSFS;
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
||||
dev_info(drvdata->dev, "TMC-ETF enabled\n");
|
||||
|
@ -280,7 +270,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev,
|
|||
}
|
||||
|
||||
tmc_etf_disable_hw(drvdata);
|
||||
local_set(&drvdata->mode, CS_MODE_DISABLED);
|
||||
drvdata->mode = CS_MODE_DISABLED;
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
||||
dev_info(drvdata->dev, "TMC disabled\n");
|
||||
|
@ -383,7 +373,7 @@ static void tmc_update_etf_buffer(struct coresight_device *csdev,
|
|||
return;
|
||||
|
||||
/* This shouldn't happen */
|
||||
if (WARN_ON_ONCE(local_read(&drvdata->mode) != CS_MODE_PERF))
|
||||
if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF))
|
||||
return;
|
||||
|
||||
CS_UNLOCK(drvdata->base);
|
||||
|
@ -504,7 +494,6 @@ const struct coresight_ops tmc_etf_cs_ops = {
|
|||
|
||||
int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
|
||||
{
|
||||
long val;
|
||||
enum tmc_mode mode;
|
||||
int ret = 0;
|
||||
unsigned long flags;
|
||||
|
@ -528,9 +517,8 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
|
|||
goto out;
|
||||
}
|
||||
|
||||
val = local_read(&drvdata->mode);
|
||||
/* Don't interfere if operated from Perf */
|
||||
if (val == CS_MODE_PERF) {
|
||||
if (drvdata->mode == CS_MODE_PERF) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
@ -542,7 +530,7 @@ int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
|
|||
}
|
||||
|
||||
/* Disable the TMC if need be */
|
||||
if (val == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
tmc_etb_disable_hw(drvdata);
|
||||
|
||||
drvdata->reading = true;
|
||||
|
@ -573,7 +561,7 @@ int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
|
|||
}
|
||||
|
||||
/* Re-enable the TMC if need be */
|
||||
if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
|
||||
if (drvdata->mode == CS_MODE_SYSFS) {
|
||||
/*
|
||||
* The trace run will continue with the same allocated trace
|
||||
* buffer. As such zero-out the buffer so that we don't end
|
||||
|
|
|
@ -86,26 +86,22 @@ static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
|
|||
* When operating in sysFS mode the content of the buffer needs to be
|
||||
* read before the TMC is disabled.
|
||||
*/
|
||||
if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
tmc_etr_dump_hw(drvdata);
|
||||
tmc_disable_hw(drvdata);
|
||||
|
||||
CS_LOCK(drvdata->base);
|
||||
}
|
||||
|
||||
static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
||||
static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
|
||||
{
|
||||
int ret = 0;
|
||||
bool used = false;
|
||||
long val;
|
||||
unsigned long flags;
|
||||
void __iomem *vaddr = NULL;
|
||||
dma_addr_t paddr;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
/* This shouldn't be happening */
|
||||
if (WARN_ON(mode != CS_MODE_SYSFS))
|
||||
return -EINVAL;
|
||||
|
||||
/*
|
||||
* If we don't have a buffer release the lock and allocate memory.
|
||||
|
@ -134,13 +130,12 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
|||
goto out;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, mode);
|
||||
/*
|
||||
* In sysFS mode we can have multiple writers per sink. Since this
|
||||
* sink is already enabled no memory is needed and the HW need not be
|
||||
* touched.
|
||||
*/
|
||||
if (val == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
goto out;
|
||||
|
||||
/*
|
||||
|
@ -155,8 +150,7 @@ static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode)
|
|||
drvdata->buf = drvdata->vaddr;
|
||||
}
|
||||
|
||||
memset(drvdata->vaddr, 0, drvdata->size);
|
||||
|
||||
drvdata->mode = CS_MODE_SYSFS;
|
||||
tmc_etr_enable_hw(drvdata);
|
||||
out:
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
@ -171,34 +165,29 @@ out:
|
|||
return ret;
|
||||
}
|
||||
|
||||
static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, u32 mode)
|
||||
static int tmc_enable_etr_sink_perf(struct coresight_device *csdev)
|
||||
{
|
||||
int ret = 0;
|
||||
long val;
|
||||
unsigned long flags;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
/* This shouldn't be happening */
|
||||
if (WARN_ON(mode != CS_MODE_PERF))
|
||||
return -EINVAL;
|
||||
|
||||
spin_lock_irqsave(&drvdata->spinlock, flags);
|
||||
if (drvdata->reading) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, mode);
|
||||
/*
|
||||
* In Perf mode there can be only one writer per sink. There
|
||||
* is also no need to continue if the ETR is already operated
|
||||
* from sysFS.
|
||||
*/
|
||||
if (val != CS_MODE_DISABLED) {
|
||||
if (drvdata->mode != CS_MODE_DISABLED) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
drvdata->mode = CS_MODE_PERF;
|
||||
tmc_etr_enable_hw(drvdata);
|
||||
out:
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
@ -210,9 +199,9 @@ static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode)
|
|||
{
|
||||
switch (mode) {
|
||||
case CS_MODE_SYSFS:
|
||||
return tmc_enable_etr_sink_sysfs(csdev, mode);
|
||||
return tmc_enable_etr_sink_sysfs(csdev);
|
||||
case CS_MODE_PERF:
|
||||
return tmc_enable_etr_sink_perf(csdev, mode);
|
||||
return tmc_enable_etr_sink_perf(csdev);
|
||||
}
|
||||
|
||||
/* We shouldn't be here */
|
||||
|
@ -221,7 +210,6 @@ static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode)
|
|||
|
||||
static void tmc_disable_etr_sink(struct coresight_device *csdev)
|
||||
{
|
||||
long val;
|
||||
unsigned long flags;
|
||||
struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
|
||||
|
||||
|
@ -231,10 +219,11 @@ static void tmc_disable_etr_sink(struct coresight_device *csdev)
|
|||
return;
|
||||
}
|
||||
|
||||
val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
|
||||
/* Disable the TMC only if it needs to */
|
||||
if (val != CS_MODE_DISABLED)
|
||||
if (drvdata->mode != CS_MODE_DISABLED) {
|
||||
tmc_etr_disable_hw(drvdata);
|
||||
drvdata->mode = CS_MODE_DISABLED;
|
||||
}
|
||||
|
||||
spin_unlock_irqrestore(&drvdata->spinlock, flags);
|
||||
|
||||
|
@ -253,7 +242,6 @@ const struct coresight_ops tmc_etr_cs_ops = {
|
|||
int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
|
||||
{
|
||||
int ret = 0;
|
||||
long val;
|
||||
unsigned long flags;
|
||||
|
||||
/* config types are set a boot time and never change */
|
||||
|
@ -266,9 +254,8 @@ int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
|
|||
goto out;
|
||||
}
|
||||
|
||||
val = local_read(&drvdata->mode);
|
||||
/* Don't interfere if operated from Perf */
|
||||
if (val == CS_MODE_PERF) {
|
||||
if (drvdata->mode == CS_MODE_PERF) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
@ -280,7 +267,7 @@ int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
|
|||
}
|
||||
|
||||
/* Disable the TMC if need be */
|
||||
if (val == CS_MODE_SYSFS)
|
||||
if (drvdata->mode == CS_MODE_SYSFS)
|
||||
tmc_etr_disable_hw(drvdata);
|
||||
|
||||
drvdata->reading = true;
|
||||
|
@ -303,7 +290,7 @@ int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata)
|
|||
spin_lock_irqsave(&drvdata->spinlock, flags);
|
||||
|
||||
/* RE-enable the TMC if need be */
|
||||
if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
|
||||
if (drvdata->mode == CS_MODE_SYSFS) {
|
||||
/*
|
||||
* The trace run will continue with the same allocated trace
|
||||
* buffer. The trace buffer is cleared in tmc_etr_enable_hw(),
|
||||
|
|
|
@ -117,7 +117,7 @@ struct tmc_drvdata {
|
|||
void __iomem *vaddr;
|
||||
u32 size;
|
||||
u32 len;
|
||||
local_t mode;
|
||||
u32 mode;
|
||||
enum tmc_config_type config_type;
|
||||
enum tmc_mem_intf_width memwidth;
|
||||
u32 trigger_cntr;
|
||||
|
|
|
@ -368,6 +368,52 @@ struct coresight_device *coresight_get_sink(struct list_head *path)
|
|||
return csdev;
|
||||
}
|
||||
|
||||
static int coresight_enabled_sink(struct device *dev, void *data)
|
||||
{
|
||||
bool *reset = data;
|
||||
struct coresight_device *csdev = to_coresight_device(dev);
|
||||
|
||||
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
|
||||
csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) &&
|
||||
csdev->activated) {
|
||||
/*
|
||||
* Now that we have a handle on the sink for this session,
|
||||
* disable the sysFS "enable_sink" flag so that possible
|
||||
* concurrent perf session that wish to use another sink don't
|
||||
* trip on it. Doing so has no ramification for the current
|
||||
* session.
|
||||
*/
|
||||
if (*reset)
|
||||
csdev->activated = false;
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* coresight_get_enabled_sink - returns the first enabled sink found on the bus
|
||||
* @deactivate: Whether the 'enable_sink' flag should be reset
|
||||
*
|
||||
* When operated from perf the deactivate parameter should be set to 'true'.
|
||||
* That way the "enabled_sink" flag of the sink that was selected can be reset,
|
||||
* allowing for other concurrent perf sessions to choose a different sink.
|
||||
*
|
||||
* When operated from sysFS users have full control and as such the deactivate
|
||||
* parameter should be set to 'false', hence mandating users to explicitly
|
||||
* clear the flag.
|
||||
*/
|
||||
struct coresight_device *coresight_get_enabled_sink(bool deactivate)
|
||||
{
|
||||
struct device *dev = NULL;
|
||||
|
||||
dev = bus_find_device(&coresight_bustype, NULL, &deactivate,
|
||||
coresight_enabled_sink);
|
||||
|
||||
return dev ? to_coresight_device(dev) : NULL;
|
||||
}
|
||||
|
||||
/**
|
||||
* _coresight_build_path - recursively build a path from a @csdev to a sink.
|
||||
* @csdev: The device to start from.
|
||||
|
@ -380,6 +426,7 @@ struct coresight_device *coresight_get_sink(struct list_head *path)
|
|||
* last one.
|
||||
*/
|
||||
static int _coresight_build_path(struct coresight_device *csdev,
|
||||
struct coresight_device *sink,
|
||||
struct list_head *path)
|
||||
{
|
||||
int i;
|
||||
|
@ -387,15 +434,15 @@ static int _coresight_build_path(struct coresight_device *csdev,
|
|||
struct coresight_node *node;
|
||||
|
||||
/* An activated sink has been found. Enqueue the element */
|
||||
if ((csdev->type == CORESIGHT_DEV_TYPE_SINK ||
|
||||
csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) && csdev->activated)
|
||||
if (csdev == sink)
|
||||
goto out;
|
||||
|
||||
/* Not a sink - recursively explore each port found on this element */
|
||||
for (i = 0; i < csdev->nr_outport; i++) {
|
||||
struct coresight_device *child_dev = csdev->conns[i].child_dev;
|
||||
|
||||
if (child_dev && _coresight_build_path(child_dev, path) == 0) {
|
||||
if (child_dev &&
|
||||
_coresight_build_path(child_dev, sink, path) == 0) {
|
||||
found = true;
|
||||
break;
|
||||
}
|
||||
|
@ -422,18 +469,22 @@ out:
|
|||
return 0;
|
||||
}
|
||||
|
||||
struct list_head *coresight_build_path(struct coresight_device *csdev)
|
||||
struct list_head *coresight_build_path(struct coresight_device *source,
|
||||
struct coresight_device *sink)
|
||||
{
|
||||
struct list_head *path;
|
||||
int rc;
|
||||
|
||||
if (!sink)
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
path = kzalloc(sizeof(struct list_head), GFP_KERNEL);
|
||||
if (!path)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
INIT_LIST_HEAD(path);
|
||||
|
||||
rc = _coresight_build_path(csdev, path);
|
||||
rc = _coresight_build_path(source, sink, path);
|
||||
if (rc) {
|
||||
kfree(path);
|
||||
return ERR_PTR(rc);
|
||||
|
@ -497,6 +548,7 @@ static int coresight_validate_source(struct coresight_device *csdev,
|
|||
int coresight_enable(struct coresight_device *csdev)
|
||||
{
|
||||
int cpu, ret = 0;
|
||||
struct coresight_device *sink;
|
||||
struct list_head *path;
|
||||
|
||||
mutex_lock(&coresight_mutex);
|
||||
|
@ -508,7 +560,17 @@ int coresight_enable(struct coresight_device *csdev)
|
|||
if (csdev->enable)
|
||||
goto out;
|
||||
|
||||
path = coresight_build_path(csdev);
|
||||
/*
|
||||
* Search for a valid sink for this session but don't reset the
|
||||
* "enable_sink" flag in sysFS. Users get to do that explicitly.
|
||||
*/
|
||||
sink = coresight_get_enabled_sink(false);
|
||||
if (!sink) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
path = coresight_build_path(csdev, sink);
|
||||
if (IS_ERR(path)) {
|
||||
pr_err("building path(s) failed\n");
|
||||
ret = PTR_ERR(path);
|
||||
|
|
|
@ -29,6 +29,9 @@
|
|||
#include "intel_th.h"
|
||||
#include "debug.h"
|
||||
|
||||
static bool host_mode __read_mostly;
|
||||
module_param(host_mode, bool, 0444);
|
||||
|
||||
static DEFINE_IDA(intel_th_ida);
|
||||
|
||||
static int intel_th_match(struct device *dev, struct device_driver *driver)
|
||||
|
@ -380,7 +383,7 @@ static void intel_th_device_free(struct intel_th_device *thdev)
|
|||
/*
|
||||
* Intel(R) Trace Hub subdevices
|
||||
*/
|
||||
static struct intel_th_subdevice {
|
||||
static const struct intel_th_subdevice {
|
||||
const char *name;
|
||||
struct resource res[3];
|
||||
unsigned nres;
|
||||
|
@ -527,14 +530,19 @@ static int intel_th_populate(struct intel_th *th, struct resource *devres,
|
|||
{
|
||||
struct resource res[3];
|
||||
unsigned int req = 0;
|
||||
int i, err;
|
||||
int src, dst, err;
|
||||
|
||||
/* create devices for each intel_th_subdevice */
|
||||
for (i = 0; i < ARRAY_SIZE(intel_th_subdevices); i++) {
|
||||
struct intel_th_subdevice *subdev = &intel_th_subdevices[i];
|
||||
for (src = 0, dst = 0; src < ARRAY_SIZE(intel_th_subdevices); src++) {
|
||||
const struct intel_th_subdevice *subdev =
|
||||
&intel_th_subdevices[src];
|
||||
struct intel_th_device *thdev;
|
||||
int r;
|
||||
|
||||
/* only allow SOURCE and SWITCH devices in host mode */
|
||||
if (host_mode && subdev->type == INTEL_TH_OUTPUT)
|
||||
continue;
|
||||
|
||||
thdev = intel_th_device_alloc(th, subdev->type, subdev->name,
|
||||
subdev->id);
|
||||
if (!thdev) {
|
||||
|
@ -577,10 +585,12 @@ static int intel_th_populate(struct intel_th *th, struct resource *devres,
|
|||
}
|
||||
|
||||
if (subdev->type == INTEL_TH_OUTPUT) {
|
||||
thdev->dev.devt = MKDEV(th->major, i);
|
||||
thdev->dev.devt = MKDEV(th->major, dst);
|
||||
thdev->output.type = subdev->otype;
|
||||
thdev->output.port = -1;
|
||||
thdev->output.scratchpad = subdev->scrpd;
|
||||
} else if (subdev->type == INTEL_TH_SWITCH) {
|
||||
thdev->host_mode = host_mode;
|
||||
}
|
||||
|
||||
err = device_add(&thdev->dev);
|
||||
|
@ -597,14 +607,14 @@ static int intel_th_populate(struct intel_th *th, struct resource *devres,
|
|||
req++;
|
||||
}
|
||||
|
||||
th->thdev[i] = thdev;
|
||||
th->thdev[dst++] = thdev;
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
kill_subdevs:
|
||||
for (i-- ; i >= 0; i--)
|
||||
intel_th_device_remove(th->thdev[i]);
|
||||
for (; dst >= 0; dst--)
|
||||
intel_th_device_remove(th->thdev[dst]);
|
||||
|
||||
return err;
|
||||
}
|
||||
|
@ -717,7 +727,7 @@ void intel_th_free(struct intel_th *th)
|
|||
|
||||
intel_th_request_hub_module_flush(th);
|
||||
for (i = 0; i < TH_SUBDEVICE_MAX; i++)
|
||||
if (th->thdev[i] != th->hub)
|
||||
if (th->thdev[i] && th->thdev[i] != th->hub)
|
||||
intel_th_device_remove(th->thdev[i]);
|
||||
|
||||
intel_th_device_remove(th->hub);
|
||||
|
|
|
@ -564,6 +564,9 @@ static int intel_th_gth_assign(struct intel_th_device *thdev,
|
|||
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
|
||||
int i, id;
|
||||
|
||||
if (thdev->host_mode)
|
||||
return -EBUSY;
|
||||
|
||||
if (othdev->type != INTEL_TH_OUTPUT)
|
||||
return -EINVAL;
|
||||
|
||||
|
@ -600,6 +603,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev,
|
|||
struct gth_device *gth = dev_get_drvdata(&thdev->dev);
|
||||
int port = othdev->output.port;
|
||||
|
||||
if (thdev->host_mode)
|
||||
return;
|
||||
|
||||
spin_lock(>h->gth_lock);
|
||||
othdev->output.port = -1;
|
||||
othdev->output.active = false;
|
||||
|
@ -654,9 +660,24 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
gth->base = base;
|
||||
spin_lock_init(>h->gth_lock);
|
||||
|
||||
/*
|
||||
* Host mode can be signalled via SW means or via SCRPD_DEBUGGER_IN_USE
|
||||
* bit. Either way, don't reset HW in this case, and don't export any
|
||||
* capture configuration attributes. Also, refuse to assign output
|
||||
* drivers to ports, see intel_th_gth_assign().
|
||||
*/
|
||||
if (thdev->host_mode)
|
||||
goto done;
|
||||
|
||||
ret = intel_th_gth_reset(gth);
|
||||
if (ret)
|
||||
return ret;
|
||||
if (ret) {
|
||||
if (ret != -EBUSY)
|
||||
return ret;
|
||||
|
||||
thdev->host_mode = true;
|
||||
|
||||
goto done;
|
||||
}
|
||||
|
||||
for (i = 0; i < TH_CONFIGURABLE_MASTERS + 1; i++)
|
||||
gth->master[i] = -1;
|
||||
|
@ -677,6 +698,7 @@ static int intel_th_gth_probe(struct intel_th_device *thdev)
|
|||
return -ENOMEM;
|
||||
}
|
||||
|
||||
done:
|
||||
dev_set_drvdata(dev, gth);
|
||||
|
||||
return 0;
|
||||
|
|
|
@ -54,6 +54,7 @@ struct intel_th_output {
|
|||
* @num_resources: number of resources in @resource array
|
||||
* @type: INTEL_TH_{SOURCE,OUTPUT,SWITCH}
|
||||
* @id: device instance or -1
|
||||
* @host_mode: Intel TH is controlled by an external debug host
|
||||
* @output: output descriptor for INTEL_TH_OUTPUT devices
|
||||
* @name: device name to match the driver
|
||||
*/
|
||||
|
@ -64,6 +65,9 @@ struct intel_th_device {
|
|||
unsigned int type;
|
||||
int id;
|
||||
|
||||
/* INTEL_TH_SWITCH specific */
|
||||
bool host_mode;
|
||||
|
||||
/* INTEL_TH_OUTPUT specific */
|
||||
struct intel_th_output output;
|
||||
|
||||
|
|
|
@@ -361,7 +361,7 @@ static int stm_char_open(struct inode *inode, struct file *file)
	struct stm_file *stmf;
	struct device *dev;
	unsigned int major = imajor(inode);
	int err = -ENODEV;
	int err = -ENOMEM;

	dev = class_find_device(&stm_class, NULL, &major, major_match);
	if (!dev)

@@ -369,8 +369,9 @@ static int stm_char_open(struct inode *inode, struct file *file)

	stmf = kzalloc(sizeof(*stmf), GFP_KERNEL);
	if (!stmf)
		return -ENOMEM;
		goto err_put_device;

	err = -ENODEV;
	stm_output_init(&stmf->output);
	stmf->stm = to_stm_device(dev);

@@ -382,9 +383,10 @@ static int stm_char_open(struct inode *inode, struct file *file)
	return nonseekable_open(inode, file);

err_free:
	kfree(stmf);
err_put_device:
	/* matches class_find_device() above */
	put_device(dev);
	kfree(stmf);

	return err;
}

@@ -22,7 +22,7 @@
#include <linux/types.h>
#include <linux/sem.h>
#include <linux/bitmap.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/miscdevice.h>
#include <linux/lightnvm.h>
#include <linux/sched/sysctl.h>

@@ -1129,10 +1129,4 @@ static struct miscdevice _nvm_misc = {
	.nodename	= "lightnvm/control",
	.fops		= &_ctl_fops,
};
module_misc_device(_nvm_misc);

MODULE_ALIAS_MISCDEV(MISC_DYNAMIC_MINOR);

MODULE_AUTHOR("Matias Bjorling <m@bjorling.me>");
MODULE_LICENSE("GPL v2");
MODULE_VERSION("0.1");
builtin_misc_device(_nvm_misc);

@@ -149,7 +149,7 @@ static int chameleon_get_bar(char __iomem **base, phys_addr_t mapbase,
	reg = readl(*base);

	bar_count = BAR_CNT(reg);
	if (bar_count <= 0 && bar_count > CHAMELEON_BAR_MAX)
	if (bar_count <= 0 || bar_count > CHAMELEON_BAR_MAX)
		return -ENODEV;

	c = kcalloc(bar_count, sizeof(struct chameleon_bar),

@@ -41,7 +41,6 @@
#include "genwqe_driver.h"

#define GENWQE_MSI_IRQS		4  /* Just one supported, no MSIx */
#define GENWQE_FLAG_MSI_ENABLED	(1 << 0)

#define GENWQE_MAX_VFS		15 /* maximum 15 VFs are possible */
#define GENWQE_MAX_FUNCS	16 /* 1 PF and 15 VFs */

@@ -740,13 +740,10 @@ int genwqe_read_softreset(struct genwqe_dev *cd)
int genwqe_set_interrupt_capability(struct genwqe_dev *cd, int count)
{
	int rc;
	struct pci_dev *pci_dev = cd->pci_dev;

	rc = pci_enable_msi_range(pci_dev, 1, count);
	rc = pci_alloc_irq_vectors(cd->pci_dev, 1, count, PCI_IRQ_MSI);
	if (rc < 0)
		return rc;

	cd->flags |= GENWQE_FLAG_MSI_ENABLED;
	return 0;
}

@@ -756,12 +753,7 @@ int genwqe_set_interrupt_capability(struct genwqe_dev *cd, int count)
 */
void genwqe_reset_interrupt_capability(struct genwqe_dev *cd)
{
	struct pci_dev *pci_dev = cd->pci_dev;

	if (cd->flags & GENWQE_FLAG_MSI_ENABLED) {
		pci_disable_msi(pci_dev);
		cd->flags &= ~GENWQE_FLAG_MSI_ENABLED;
	}
	pci_free_irq_vectors(cd->pci_dev);
}

/**

@@ -85,7 +85,8 @@ noinline void lkdtm_CORRUPT_STACK(void)
	/* Use default char array length that triggers stack protection. */
	char data[8];

	memset((void *)data, 0, 64);
	memset((void *)data, 'a', 64);
	pr_info("Corrupted stack with '%16s'...\n", data);
}

void lkdtm_UNALIGNED_LOAD_STORE_WRITE(void)

@@ -60,15 +60,18 @@ static noinline void execute_location(void *dst, bool write)

static void execute_user_location(void *dst)
{
	int copied;

	/* Intentionally crossing kernel/user memory boundary. */
	void (*func)(void) = dst;

	pr_info("attempting ok execution at %p\n", do_nothing);
	do_nothing();

	if (copy_to_user((void __user *)dst, do_nothing, EXEC_SIZE))
	copied = access_process_vm(current, (unsigned long)dst, do_nothing,
				   EXEC_SIZE, FOLL_WRITE);
	if (copied < EXEC_SIZE)
		return;
	flush_icache_range((unsigned long)dst, (unsigned long)dst + EXEC_SIZE);
	pr_info("attempting bad execution at %p\n", func);
	func();
}

@ -144,7 +144,7 @@ int mei_amthif_run_next_cmd(struct mei_device *dev)
|
|||
dev->iamthif_state = MEI_IAMTHIF_WRITING;
|
||||
cl->fp = cb->fp;
|
||||
|
||||
ret = mei_cl_write(cl, cb, false);
|
||||
ret = mei_cl_write(cl, cb);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
|
|
|
@ -38,6 +38,9 @@ static const uuid_le mei_nfc_info_guid = MEI_UUID_NFC_INFO;
|
|||
#define MEI_UUID_WD UUID_LE(0x05B79A6F, 0x4628, 0x4D7F, \
|
||||
0x89, 0x9D, 0xA9, 0x15, 0x14, 0xCB, 0x32, 0xAB)
|
||||
|
||||
#define MEI_UUID_MKHIF_FIX UUID_LE(0x55213584, 0x9a29, 0x4916, \
|
||||
0xba, 0xdf, 0xf, 0xb7, 0xed, 0x68, 0x2a, 0xeb)
|
||||
|
||||
#define MEI_UUID_ANY NULL_UUID_LE
|
||||
|
||||
/**
|
||||
|
@ -69,6 +72,97 @@ static void blacklist(struct mei_cl_device *cldev)
|
|||
cldev->do_match = 0;
|
||||
}
|
||||
|
||||
#define OSTYPE_LINUX 2
|
||||
struct mei_os_ver {
|
||||
__le16 build;
|
||||
__le16 reserved1;
|
||||
u8 os_type;
|
||||
u8 major;
|
||||
u8 minor;
|
||||
u8 reserved2;
|
||||
} __packed;
|
||||
|
||||
#define MKHI_FEATURE_PTT 0x10
|
||||
|
||||
struct mkhi_rule_id {
|
||||
__le16 rule_type;
|
||||
u8 feature_id;
|
||||
u8 reserved;
|
||||
} __packed;
|
||||
|
||||
struct mkhi_fwcaps {
|
||||
struct mkhi_rule_id id;
|
||||
u8 len;
|
||||
u8 data[0];
|
||||
} __packed;
|
||||
|
||||
#define MKHI_FWCAPS_GROUP_ID 0x3
|
||||
#define MKHI_FWCAPS_SET_OS_VER_APP_RULE_CMD 6
|
||||
struct mkhi_msg_hdr {
|
||||
u8 group_id;
|
||||
u8 command;
|
||||
u8 reserved;
|
||||
u8 result;
|
||||
} __packed;
|
||||
|
||||
struct mkhi_msg {
|
||||
struct mkhi_msg_hdr hdr;
|
||||
u8 data[0];
|
||||
} __packed;
|
||||
|
||||
static int mei_osver(struct mei_cl_device *cldev)
|
||||
{
|
||||
int ret;
|
||||
const size_t size = sizeof(struct mkhi_msg_hdr) +
|
||||
sizeof(struct mkhi_fwcaps) +
|
||||
sizeof(struct mei_os_ver);
|
||||
size_t length = 8;
|
||||
char buf[size];
|
||||
struct mkhi_msg *req;
|
||||
struct mkhi_fwcaps *fwcaps;
|
||||
struct mei_os_ver *os_ver;
|
||||
unsigned int mode = MEI_CL_IO_TX_BLOCKING | MEI_CL_IO_TX_INTERNAL;
|
||||
|
||||
memset(buf, 0, size);
|
||||
|
||||
req = (struct mkhi_msg *)buf;
|
||||
req->hdr.group_id = MKHI_FWCAPS_GROUP_ID;
|
||||
req->hdr.command = MKHI_FWCAPS_SET_OS_VER_APP_RULE_CMD;
|
||||
|
||||
fwcaps = (struct mkhi_fwcaps *)req->data;
|
||||
|
||||
fwcaps->id.rule_type = 0x0;
|
||||
fwcaps->id.feature_id = MKHI_FEATURE_PTT;
|
||||
fwcaps->len = sizeof(*os_ver);
|
||||
os_ver = (struct mei_os_ver *)fwcaps->data;
|
||||
os_ver->os_type = OSTYPE_LINUX;
|
||||
|
||||
ret = __mei_cl_send(cldev->cl, buf, size, mode);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
ret = __mei_cl_recv(cldev->cl, buf, length, 0);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void mei_mkhi_fix(struct mei_cl_device *cldev)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = mei_cldev_enable(cldev);
|
||||
if (ret)
|
||||
return;
|
||||
|
||||
ret = mei_osver(cldev);
|
||||
if (ret)
|
||||
dev_err(&cldev->dev, "OS version command failed %d\n", ret);
|
||||
|
||||
mei_cldev_disable(cldev);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_wd - wd client on the bus, change protocol version
|
||||
* as the API has changed.
|
||||
|
@@ -162,7 +256,8 @@ static int mei_nfc_if_version(struct mei_cl *cl,

	WARN_ON(mutex_is_locked(&bus->device_lock));

-	ret = __mei_cl_send(cl, (u8 *)&cmd, sizeof(struct mei_nfc_cmd), 1);
+	ret = __mei_cl_send(cl, (u8 *)&cmd, sizeof(struct mei_nfc_cmd),
+			    MEI_CL_IO_TX_BLOCKING);
	if (ret < 0) {
		dev_err(bus->dev, "Could not send IF version cmd\n");
		return ret;

@@ -177,7 +272,7 @@ static int mei_nfc_if_version(struct mei_cl *cl,
		return -ENOMEM;

	ret = 0;
-	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length);
+	bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0);
	if (bytes_recv < if_version_length) {
		dev_err(bus->dev, "Could not read IF version\n");
		ret = -EIO;

@@ -309,6 +404,7 @@ static struct mei_fixup {
	MEI_FIXUP(MEI_UUID_NFC_INFO, blacklist),
	MEI_FIXUP(MEI_UUID_NFC_HCI, mei_nfc),
	MEI_FIXUP(MEI_UUID_WD, mei_wd),
+	MEI_FIXUP(MEI_UUID_MKHIF_FIX, mei_mkhi_fix),
};

/**

@@ -36,12 +36,12 @@
 * @cl: host client
 * @buf: buffer to send
 * @length: buffer length
- * @blocking: wait for write completion
+ * @mode: sending mode
 *
 * Return: written size bytes or < 0 on error
 */
ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
-		      bool blocking)
+		      unsigned int mode)
{
	struct mei_device *bus;
	struct mei_cl_cb *cb;

@@ -80,9 +80,11 @@ ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
		goto out;
	}

+	cb->internal = !!(mode & MEI_CL_IO_TX_INTERNAL);
+	cb->blocking = !!(mode & MEI_CL_IO_TX_BLOCKING);
	memcpy(cb->buf.data, buf, length);

-	rets = mei_cl_write(cl, cb, blocking);
+	rets = mei_cl_write(cl, cb);

out:
	mutex_unlock(&bus->device_lock);

@@ -96,15 +98,18 @@ out:
 * @cl: host client
 * @buf: buffer to receive
 * @length: buffer length
+ * @mode: io mode
 *
 * Return: read size in bytes or < 0 on error
 */
-ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
+ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length,
+		      unsigned int mode)
{
	struct mei_device *bus;
	struct mei_cl_cb *cb;
	size_t r_length;
	ssize_t rets;
+	bool nonblock = !!(mode & MEI_CL_IO_RX_NONBLOCK);

	if (WARN_ON(!cl || !cl->dev))
		return -ENODEV;

@@ -125,6 +130,11 @@ ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length)
	if (rets && rets != -EBUSY)
		goto out;

+	if (nonblock) {
+		rets = -EAGAIN;
+		goto out;
+	}
+
	/* wait on event only if there is no other waiter */
	/* synchronized under device mutex */
	if (!waitqueue_active(&cl->rx_wait)) {

@ -185,13 +195,29 @@ ssize_t mei_cldev_send(struct mei_cl_device *cldev, u8 *buf, size_t length)
|
|||
{
|
||||
struct mei_cl *cl = cldev->cl;
|
||||
|
||||
if (cl == NULL)
|
||||
return -ENODEV;
|
||||
|
||||
return __mei_cl_send(cl, buf, length, 1);
|
||||
return __mei_cl_send(cl, buf, length, MEI_CL_IO_TX_BLOCKING);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_send);
|
||||
|
||||
/**
|
||||
* mei_cldev_recv_nonblock - non block client receive (read)
|
||||
*
|
||||
* @cldev: me client device
|
||||
* @buf: buffer to receive
|
||||
* @length: buffer length
|
||||
*
|
||||
* Return: read size in bytes or < 0 on error
|
||||
* -EAGAIN if function will block.
|
||||
*/
|
||||
ssize_t mei_cldev_recv_nonblock(struct mei_cl_device *cldev, u8 *buf,
|
||||
size_t length)
|
||||
{
|
||||
struct mei_cl *cl = cldev->cl;
|
||||
|
||||
return __mei_cl_recv(cl, buf, length, MEI_CL_IO_RX_NONBLOCK);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_recv_nonblock);
|
||||
|
||||
/**
|
||||
* mei_cldev_recv - client receive (read)
|
||||
*
|
||||
|
@ -205,39 +231,45 @@ ssize_t mei_cldev_recv(struct mei_cl_device *cldev, u8 *buf, size_t length)
|
|||
{
|
||||
struct mei_cl *cl = cldev->cl;
|
||||
|
||||
if (cl == NULL)
|
||||
return -ENODEV;
|
||||
|
||||
return __mei_cl_recv(cl, buf, length);
|
||||
return __mei_cl_recv(cl, buf, length, 0);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_recv);
|
||||
|
||||
/**
|
||||
* mei_cl_bus_event_work - dispatch rx event for a bus device
|
||||
* and schedule new work
|
||||
* mei_cl_bus_rx_work - dispatch rx event for a bus device
|
||||
*
|
||||
* @work: work
|
||||
*/
|
||||
static void mei_cl_bus_event_work(struct work_struct *work)
|
||||
static void mei_cl_bus_rx_work(struct work_struct *work)
|
||||
{
|
||||
struct mei_cl_device *cldev;
|
||||
struct mei_device *bus;
|
||||
|
||||
cldev = container_of(work, struct mei_cl_device, event_work);
|
||||
cldev = container_of(work, struct mei_cl_device, rx_work);
|
||||
|
||||
bus = cldev->bus;
|
||||
|
||||
if (cldev->event_cb)
|
||||
cldev->event_cb(cldev, cldev->events, cldev->event_context);
|
||||
if (cldev->rx_cb)
|
||||
cldev->rx_cb(cldev);
|
||||
|
||||
cldev->events = 0;
|
||||
mutex_lock(&bus->device_lock);
|
||||
mei_cl_read_start(cldev->cl, mei_cl_mtu(cldev->cl), NULL);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
}
|
||||
|
||||
/* Prepare for the next read */
|
||||
if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
|
||||
mutex_lock(&bus->device_lock);
|
||||
mei_cl_read_start(cldev->cl, mei_cl_mtu(cldev->cl), NULL);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
}
|
||||
/**
|
||||
* mei_cl_bus_notif_work - dispatch FW notif event for a bus device
|
||||
*
|
||||
* @work: work
|
||||
*/
|
||||
static void mei_cl_bus_notif_work(struct work_struct *work)
|
||||
{
|
||||
struct mei_cl_device *cldev;
|
||||
|
||||
cldev = container_of(work, struct mei_cl_device, notif_work);
|
||||
|
||||
if (cldev->notif_cb)
|
||||
cldev->notif_cb(cldev);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -252,18 +284,13 @@ bool mei_cl_bus_notify_event(struct mei_cl *cl)
|
|||
{
|
||||
struct mei_cl_device *cldev = cl->cldev;
|
||||
|
||||
if (!cldev || !cldev->event_cb)
|
||||
return false;
|
||||
|
||||
if (!(cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)))
|
||||
if (!cldev || !cldev->notif_cb)
|
||||
return false;
|
||||
|
||||
if (!cl->notify_ev)
|
||||
return false;
|
||||
|
||||
set_bit(MEI_CL_EVENT_NOTIF, &cldev->events);
|
||||
|
||||
schedule_work(&cldev->event_work);
|
||||
schedule_work(&cldev->notif_work);
|
||||
|
||||
cl->notify_ev = false;
|
||||
|
||||
|
@ -271,7 +298,7 @@ bool mei_cl_bus_notify_event(struct mei_cl *cl)
|
|||
}
|
||||
|
||||
/**
|
||||
* mei_cl_bus_rx_event - schedule rx event
|
||||
* mei_cl_bus_rx_event - schedule rx event
|
||||
*
|
||||
* @cl: host client
|
||||
*
|
||||
|
@ -282,66 +309,81 @@ bool mei_cl_bus_rx_event(struct mei_cl *cl)
|
|||
{
|
||||
struct mei_cl_device *cldev = cl->cldev;
|
||||
|
||||
if (!cldev || !cldev->event_cb)
|
||||
if (!cldev || !cldev->rx_cb)
|
||||
return false;
|
||||
|
||||
if (!(cldev->events_mask & BIT(MEI_CL_EVENT_RX)))
|
||||
return false;
|
||||
|
||||
set_bit(MEI_CL_EVENT_RX, &cldev->events);
|
||||
|
||||
schedule_work(&cldev->event_work);
|
||||
schedule_work(&cldev->rx_work);
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_cldev_register_event_cb - register event callback
|
||||
* mei_cldev_register_rx_cb - register Rx event callback
|
||||
*
|
||||
* @cldev: me client devices
|
||||
* @event_cb: callback function
|
||||
* @events_mask: requested events bitmask
|
||||
* @context: driver context data
|
||||
* @rx_cb: callback function
|
||||
*
|
||||
* Return: 0 on success
|
||||
* -EALREADY if a callback is already registered
|
||||
* <0 on other errors
|
||||
*/
|
||||
int mei_cldev_register_event_cb(struct mei_cl_device *cldev,
|
||||
unsigned long events_mask,
|
||||
mei_cldev_event_cb_t event_cb, void *context)
|
||||
int mei_cldev_register_rx_cb(struct mei_cl_device *cldev, mei_cldev_cb_t rx_cb)
|
||||
{
|
||||
struct mei_device *bus = cldev->bus;
|
||||
int ret;
|
||||
|
||||
if (cldev->event_cb)
|
||||
if (!rx_cb)
|
||||
return -EINVAL;
|
||||
if (cldev->rx_cb)
|
||||
return -EALREADY;
|
||||
|
||||
cldev->events = 0;
|
||||
cldev->events_mask = events_mask;
|
||||
cldev->event_cb = event_cb;
|
||||
cldev->event_context = context;
|
||||
INIT_WORK(&cldev->event_work, mei_cl_bus_event_work);
|
||||
cldev->rx_cb = rx_cb;
|
||||
INIT_WORK(&cldev->rx_work, mei_cl_bus_rx_work);
|
||||
|
||||
if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
|
||||
mutex_lock(&bus->device_lock);
|
||||
ret = mei_cl_read_start(cldev->cl, mei_cl_mtu(cldev->cl), NULL);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
if (ret && ret != -EBUSY)
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)) {
|
||||
mutex_lock(&bus->device_lock);
|
||||
ret = mei_cl_notify_request(cldev->cl, NULL, event_cb ? 1 : 0);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
mutex_lock(&bus->device_lock);
|
||||
ret = mei_cl_read_start(cldev->cl, mei_cl_mtu(cldev->cl), NULL);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
if (ret && ret != -EBUSY)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_register_event_cb);
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_register_rx_cb);
|
||||
|
||||
/**
|
||||
* mei_cldev_register_notif_cb - register FW notification event callback
|
||||
*
|
||||
* @cldev: me client devices
|
||||
* @notif_cb: callback function
|
||||
*
|
||||
* Return: 0 on success
|
||||
* -EALREADY if a callback is already registered
|
||||
* <0 on other errors
|
||||
*/
|
||||
int mei_cldev_register_notif_cb(struct mei_cl_device *cldev,
|
||||
mei_cldev_cb_t notif_cb)
|
||||
{
|
||||
struct mei_device *bus = cldev->bus;
|
||||
int ret;
|
||||
|
||||
if (!notif_cb)
|
||||
return -EINVAL;
|
||||
|
||||
if (cldev->notif_cb)
|
||||
return -EALREADY;
|
||||
|
||||
cldev->notif_cb = notif_cb;
|
||||
INIT_WORK(&cldev->notif_work, mei_cl_bus_notif_work);
|
||||
|
||||
mutex_lock(&bus->device_lock);
|
||||
ret = mei_cl_notify_request(cldev->cl, NULL, 1);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_register_notif_cb);
|
||||
|
||||
/**
|
||||
* mei_cldev_get_drvdata - driver data getter
|
||||
|
@ -403,7 +445,7 @@ EXPORT_SYMBOL_GPL(mei_cldev_ver);
|
|||
*/
|
||||
bool mei_cldev_enabled(struct mei_cl_device *cldev)
|
||||
{
|
||||
return cldev->cl && mei_cl_is_connected(cldev->cl);
|
||||
return mei_cl_is_connected(cldev->cl);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(mei_cldev_enabled);
|
||||
|
||||
|
@ -423,14 +465,13 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
|
|||
|
||||
cl = cldev->cl;
|
||||
|
||||
if (!cl) {
|
||||
if (cl->state == MEI_FILE_UNINITIALIZED) {
|
||||
mutex_lock(&bus->device_lock);
|
||||
cl = mei_cl_alloc_linked(bus);
|
||||
ret = mei_cl_link(cl);
|
||||
mutex_unlock(&bus->device_lock);
|
||||
if (IS_ERR(cl))
|
||||
return PTR_ERR(cl);
|
||||
if (ret)
|
||||
return ret;
|
||||
/* update pointers */
|
||||
cldev->cl = cl;
|
||||
cl->cldev = cldev;
|
||||
}
|
||||
|
||||
|
@ -471,19 +512,17 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
|
|||
struct mei_cl *cl;
|
||||
int err;
|
||||
|
||||
if (!cldev || !cldev->cl)
|
||||
if (!cldev)
|
||||
return -ENODEV;
|
||||
|
||||
cl = cldev->cl;
|
||||
|
||||
bus = cldev->bus;
|
||||
|
||||
cldev->event_cb = NULL;
|
||||
|
||||
mutex_lock(&bus->device_lock);
|
||||
|
||||
if (!mei_cl_is_connected(cl)) {
|
||||
dev_err(bus->dev, "Already disconnected");
|
||||
dev_dbg(bus->dev, "Already disconnected");
|
||||
err = 0;
|
||||
goto out;
|
||||
}
|
||||
|
@ -497,9 +536,6 @@ out:
|
|||
mei_cl_flush_queues(cl, NULL);
|
||||
mei_cl_unlink(cl);
|
||||
|
||||
kfree(cl);
|
||||
cldev->cl = NULL;
|
||||
|
||||
mutex_unlock(&bus->device_lock);
|
||||
return err;
|
||||
}
|
||||
|
@ -629,9 +665,13 @@ static int mei_cl_device_remove(struct device *dev)
|
|||
if (!cldev || !dev->driver)
|
||||
return 0;
|
||||
|
||||
if (cldev->event_cb) {
|
||||
cldev->event_cb = NULL;
|
||||
cancel_work_sync(&cldev->event_work);
|
||||
if (cldev->rx_cb) {
|
||||
cancel_work_sync(&cldev->rx_work);
|
||||
cldev->rx_cb = NULL;
|
||||
}
|
||||
if (cldev->notif_cb) {
|
||||
cancel_work_sync(&cldev->notif_work);
|
||||
cldev->notif_cb = NULL;
|
||||
}
|
||||
|
||||
cldrv = to_mei_cl_driver(dev->driver);
|
||||
|
@ -754,6 +794,7 @@ static void mei_cl_bus_dev_release(struct device *dev)
|
|||
|
||||
mei_me_cl_put(cldev->me_cl);
|
||||
mei_dev_bus_put(cldev->bus);
|
||||
kfree(cldev->cl);
|
||||
kfree(cldev);
|
||||
}
|
||||
|
||||
|
@ -786,17 +827,25 @@ static struct mei_cl_device *mei_cl_bus_dev_alloc(struct mei_device *bus,
|
|||
struct mei_me_client *me_cl)
|
||||
{
|
||||
struct mei_cl_device *cldev;
|
||||
struct mei_cl *cl;
|
||||
|
||||
cldev = kzalloc(sizeof(struct mei_cl_device), GFP_KERNEL);
|
||||
if (!cldev)
|
||||
return NULL;
|
||||
|
||||
cl = mei_cl_allocate(bus);
|
||||
if (!cl) {
|
||||
kfree(cldev);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
device_initialize(&cldev->dev);
|
||||
cldev->dev.parent = bus->dev;
|
||||
cldev->dev.bus = &mei_cl_bus_type;
|
||||
cldev->dev.type = &mei_cl_device_type;
|
||||
cldev->bus = mei_dev_bus_get(bus);
|
||||
cldev->me_cl = mei_me_cl_get(me_cl);
|
||||
cldev->cl = cl;
|
||||
mei_cl_bus_set_name(cldev);
|
||||
cldev->is_added = 0;
|
||||
INIT_LIST_HEAD(&cldev->bus_list);
|
||||
|
|
|
@ -425,7 +425,7 @@ static inline void mei_io_list_free(struct mei_cl_cb *list, struct mei_cl *cl)
|
|||
*
|
||||
* @cl: host client
|
||||
* @length: size of the buffer
|
||||
* @type: operation type
|
||||
* @fop_type: operation type
|
||||
* @fp: associated file pointer (might be NULL)
|
||||
*
|
||||
* Return: cb on success and NULL on failure
|
||||
|
@ -459,7 +459,7 @@ struct mei_cl_cb *mei_cl_alloc_cb(struct mei_cl *cl, size_t length,
|
|||
*
|
||||
* @cl: host client
|
||||
* @length: size of the buffer
|
||||
* @type: operation type
|
||||
* @fop_type: operation type
|
||||
* @fp: associated file pointer (might be NULL)
|
||||
*
|
||||
* Return: cb on success and NULL on failure
|
||||
|
@ -571,7 +571,7 @@ void mei_cl_init(struct mei_cl *cl, struct mei_device *dev)
|
|||
INIT_LIST_HEAD(&cl->rd_pending);
|
||||
INIT_LIST_HEAD(&cl->link);
|
||||
cl->writing_state = MEI_IDLE;
|
||||
cl->state = MEI_FILE_INITIALIZING;
|
||||
cl->state = MEI_FILE_UNINITIALIZED;
|
||||
cl->dev = dev;
|
||||
}
|
||||
|
||||
|
@ -672,7 +672,12 @@ int mei_cl_unlink(struct mei_cl *cl)
|
|||
|
||||
list_del_init(&cl->link);
|
||||
|
||||
cl->state = MEI_FILE_INITIALIZING;
|
||||
cl->state = MEI_FILE_UNINITIALIZED;
|
||||
cl->writing_state = MEI_IDLE;
|
||||
|
||||
WARN_ON(!list_empty(&cl->rd_completed) ||
|
||||
!list_empty(&cl->rd_pending) ||
|
||||
!list_empty(&cl->link));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -686,7 +691,7 @@ void mei_host_client_init(struct mei_device *dev)
|
|||
|
||||
pm_runtime_mark_last_busy(dev->dev);
|
||||
dev_dbg(dev->dev, "rpm: autosuspend\n");
|
||||
pm_runtime_autosuspend(dev->dev);
|
||||
pm_request_autosuspend(dev->dev);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -756,7 +761,7 @@ void mei_cl_set_disconnected(struct mei_cl *cl)
|
|||
struct mei_device *dev = cl->dev;
|
||||
|
||||
if (cl->state == MEI_FILE_DISCONNECTED ||
|
||||
cl->state == MEI_FILE_INITIALIZING)
|
||||
cl->state <= MEI_FILE_INITIALIZING)
|
||||
return;
|
||||
|
||||
cl->state = MEI_FILE_DISCONNECTED;
|
||||
|
@ -1598,18 +1603,17 @@ int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
|
|||
*
|
||||
* @cl: host client
|
||||
* @cb: write callback with filled data
|
||||
* @blocking: block until completed
|
||||
*
|
||||
* Return: number of bytes sent on success, <0 on failure.
|
||||
*/
|
||||
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
|
||||
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb)
|
||||
{
|
||||
struct mei_device *dev;
|
||||
struct mei_msg_data *buf;
|
||||
struct mei_msg_hdr mei_hdr;
|
||||
int size;
|
||||
int rets;
|
||||
|
||||
bool blocking;
|
||||
|
||||
if (WARN_ON(!cl || !cl->dev))
|
||||
return -ENODEV;
|
||||
|
@ -1621,6 +1625,7 @@ int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking)
|
|||
|
||||
buf = &cb->buf;
|
||||
size = buf->size;
|
||||
blocking = cb->blocking;
|
||||
|
||||
cl_dbg(dev, cl, "size=%d\n", size);
|
||||
|
||||
|
|
|
@ -219,7 +219,7 @@ int mei_cl_irq_connect(struct mei_cl *cl, struct mei_cl_cb *cb,
|
|||
int mei_cl_read_start(struct mei_cl *cl, size_t length, const struct file *fp);
|
||||
int mei_cl_irq_read_msg(struct mei_cl *cl, struct mei_msg_hdr *hdr,
|
||||
struct mei_cl_cb *cmpl_list);
|
||||
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb, bool blocking);
|
||||
int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb);
|
||||
int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb,
|
||||
struct mei_cl_cb *cmpl_list);
|
||||
|
||||
|
|
|
@ -122,6 +122,8 @@
|
|||
#define MEI_DEV_ID_SPT_H 0xA13A /* Sunrise Point H */
|
||||
#define MEI_DEV_ID_SPT_H_2 0xA13B /* Sunrise Point H 2 */
|
||||
|
||||
#define MEI_DEV_ID_LBG 0xA1BA /* Lewisburg (SPT) */
|
||||
|
||||
#define MEI_DEV_ID_BXT_M 0x1A9A /* Broxton M */
|
||||
#define MEI_DEV_ID_APL_I 0x5A9A /* Apollo Lake I */
|
||||
|
||||
|
|
|
@ -246,6 +246,36 @@ static inline enum mei_pg_state mei_me_pg_state(struct mei_device *dev)
|
|||
return hw->pg_state;
|
||||
}
|
||||
|
||||
static inline u32 me_intr_src(u32 hcsr)
|
||||
{
|
||||
return hcsr & H_CSR_IS_MASK;
|
||||
}
|
||||
|
||||
/**
|
||||
* me_intr_disable - disables mei device interrupts
|
||||
* using supplied hcsr register value.
|
||||
*
|
||||
* @dev: the device structure
|
||||
* @hcsr: supplied hcsr register value
|
||||
*/
|
||||
static inline void me_intr_disable(struct mei_device *dev, u32 hcsr)
|
||||
{
|
||||
hcsr &= ~H_CSR_IE_MASK;
|
||||
mei_hcsr_set(dev, hcsr);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_me_intr_clear - clear and stop interrupts
|
||||
*
|
||||
* @dev: the device structure
|
||||
* @hcsr: supplied hcsr register value
|
||||
*/
|
||||
static inline void me_intr_clear(struct mei_device *dev, u32 hcsr)
|
||||
{
|
||||
if (me_intr_src(hcsr))
|
||||
mei_hcsr_write(dev, hcsr);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_me_intr_clear - clear and stop interrupts
|
||||
*
|
||||
|
@ -255,8 +285,7 @@ static void mei_me_intr_clear(struct mei_device *dev)
|
|||
{
|
||||
u32 hcsr = mei_hcsr_read(dev);
|
||||
|
||||
if (hcsr & H_CSR_IS_MASK)
|
||||
mei_hcsr_write(dev, hcsr);
|
||||
me_intr_clear(dev, hcsr);
|
||||
}
|
||||
/**
|
||||
* mei_me_intr_enable - enables mei device interrupts
|
||||
|
@ -280,8 +309,19 @@ static void mei_me_intr_disable(struct mei_device *dev)
|
|||
{
|
||||
u32 hcsr = mei_hcsr_read(dev);
|
||||
|
||||
hcsr &= ~H_CSR_IE_MASK;
|
||||
mei_hcsr_set(dev, hcsr);
|
||||
me_intr_disable(dev, hcsr);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_me_synchronize_irq - wait for pending IRQ handlers
|
||||
*
|
||||
* @dev: the device structure
|
||||
*/
|
||||
static void mei_me_synchronize_irq(struct mei_device *dev)
|
||||
{
|
||||
struct pci_dev *pdev = to_pci_dev(dev->dev);
|
||||
|
||||
synchronize_irq(pdev->irq);
|
||||
}
|
||||
|
||||
/**
|
||||
|
@ -450,7 +490,7 @@ static size_t mei_me_hbuf_max_len(const struct mei_device *dev)
|
|||
|
||||
|
||||
/**
|
||||
* mei_me_write_message - writes a message to mei device.
|
||||
* mei_me_hbuf_write - writes a message to host hw buffer.
|
||||
*
|
||||
* @dev: the device structure
|
||||
* @header: mei HECI header of message
|
||||
|
@ -458,9 +498,9 @@ static size_t mei_me_hbuf_max_len(const struct mei_device *dev)
|
|||
*
|
||||
* Return: -EIO if write has failed
|
||||
*/
|
||||
static int mei_me_write_message(struct mei_device *dev,
|
||||
struct mei_msg_hdr *header,
|
||||
unsigned char *buf)
|
||||
static int mei_me_hbuf_write(struct mei_device *dev,
|
||||
struct mei_msg_hdr *header,
|
||||
const unsigned char *buf)
|
||||
{
|
||||
unsigned long rem;
|
||||
unsigned long length = header->length;
|
||||
|
@ -956,13 +996,14 @@ static void mei_me_pg_legacy_intr(struct mei_device *dev)
|
|||
* mei_me_d0i3_intr - perform d0i3 processing in interrupt thread handler
|
||||
*
|
||||
* @dev: the device structure
|
||||
* @intr_source: interrupt source
|
||||
*/
|
||||
static void mei_me_d0i3_intr(struct mei_device *dev)
|
||||
static void mei_me_d0i3_intr(struct mei_device *dev, u32 intr_source)
|
||||
{
|
||||
struct mei_me_hw *hw = to_me_hw(dev);
|
||||
|
||||
if (dev->pg_event == MEI_PG_EVENT_INTR_WAIT &&
|
||||
(hw->intr_source & H_D0I3C_IS)) {
|
||||
(intr_source & H_D0I3C_IS)) {
|
||||
dev->pg_event = MEI_PG_EVENT_INTR_RECEIVED;
|
||||
if (hw->pg_state == MEI_PG_ON) {
|
||||
hw->pg_state = MEI_PG_OFF;
|
||||
|
@ -981,7 +1022,7 @@ static void mei_me_d0i3_intr(struct mei_device *dev)
|
|||
wake_up(&dev->wait_pg);
|
||||
}
|
||||
|
||||
if (hw->pg_state == MEI_PG_ON && (hw->intr_source & H_IS)) {
|
||||
if (hw->pg_state == MEI_PG_ON && (intr_source & H_IS)) {
|
||||
/*
|
||||
* HW sent some data and we are in D0i3, so
|
||||
* we got here because of HW initiated exit from D0i3.
|
||||
|
@ -996,13 +1037,14 @@ static void mei_me_d0i3_intr(struct mei_device *dev)
|
|||
* mei_me_pg_intr - perform pg processing in interrupt thread handler
|
||||
*
|
||||
* @dev: the device structure
|
||||
* @intr_source: interrupt source
|
||||
*/
|
||||
static void mei_me_pg_intr(struct mei_device *dev)
|
||||
static void mei_me_pg_intr(struct mei_device *dev, u32 intr_source)
|
||||
{
|
||||
struct mei_me_hw *hw = to_me_hw(dev);
|
||||
|
||||
if (hw->d0i3_supported)
|
||||
mei_me_d0i3_intr(dev);
|
||||
mei_me_d0i3_intr(dev, intr_source);
|
||||
else
|
||||
mei_me_pg_legacy_intr(dev);
|
||||
}
|
||||
|
@ -1121,19 +1163,16 @@ static int mei_me_hw_reset(struct mei_device *dev, bool intr_enable)
|
|||
irqreturn_t mei_me_irq_quick_handler(int irq, void *dev_id)
|
||||
{
|
||||
struct mei_device *dev = (struct mei_device *)dev_id;
|
||||
struct mei_me_hw *hw = to_me_hw(dev);
|
||||
u32 hcsr;
|
||||
|
||||
hcsr = mei_hcsr_read(dev);
|
||||
if (!(hcsr & H_CSR_IS_MASK))
|
||||
if (!me_intr_src(hcsr))
|
||||
return IRQ_NONE;
|
||||
|
||||
hw->intr_source = hcsr & H_CSR_IS_MASK;
|
||||
dev_dbg(dev->dev, "interrupt source 0x%08X.\n", hw->intr_source);
|
||||
|
||||
/* clear H_IS and H_D0I3C_IS bits in H_CSR to clear the interrupts */
|
||||
mei_hcsr_write(dev, hcsr);
|
||||
dev_dbg(dev->dev, "interrupt source 0x%08X\n", me_intr_src(hcsr));
|
||||
|
||||
/* disable interrupts on device */
|
||||
me_intr_disable(dev, hcsr);
|
||||
return IRQ_WAKE_THREAD;
|
||||
}
|
||||
|
||||
|
@ -1152,11 +1191,16 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
|
|||
struct mei_device *dev = (struct mei_device *) dev_id;
|
||||
struct mei_cl_cb complete_list;
|
||||
s32 slots;
|
||||
u32 hcsr;
|
||||
int rets = 0;
|
||||
|
||||
dev_dbg(dev->dev, "function called after ISR to handle the interrupt processing.\n");
|
||||
/* initialize our complete list */
|
||||
mutex_lock(&dev->device_lock);
|
||||
|
||||
hcsr = mei_hcsr_read(dev);
|
||||
me_intr_clear(dev, hcsr);
|
||||
|
||||
mei_io_list_init(&complete_list);
|
||||
|
||||
/* check if ME wants a reset */
|
||||
|
@ -1166,7 +1210,7 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
|
|||
goto end;
|
||||
}
|
||||
|
||||
mei_me_pg_intr(dev);
|
||||
mei_me_pg_intr(dev, me_intr_src(hcsr));
|
||||
|
||||
/* check if we need to start the dev */
|
||||
if (!mei_host_is_ready(dev)) {
|
||||
|
@ -1216,6 +1260,7 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
|
|||
|
||||
end:
|
||||
dev_dbg(dev->dev, "interrupt thread end ret = %d\n", rets);
|
||||
mei_me_intr_enable(dev);
|
||||
mutex_unlock(&dev->device_lock);
|
||||
return IRQ_HANDLED;
|
||||
}
|
||||
|
@ -1238,12 +1283,13 @@ static const struct mei_hw_ops mei_me_hw_ops = {
|
|||
.intr_clear = mei_me_intr_clear,
|
||||
.intr_enable = mei_me_intr_enable,
|
||||
.intr_disable = mei_me_intr_disable,
|
||||
.synchronize_irq = mei_me_synchronize_irq,
|
||||
|
||||
.hbuf_free_slots = mei_me_hbuf_empty_slots,
|
||||
.hbuf_is_ready = mei_me_hbuf_is_empty,
|
||||
.hbuf_max_len = mei_me_hbuf_max_len,
|
||||
|
||||
.write = mei_me_write_message,
|
||||
.write = mei_me_hbuf_write,
|
||||
|
||||
.rdbuf_full_slots = mei_me_count_full_read_slots,
|
||||
.read_hdr = mei_me_mecbrw_read,
|
||||
|
|
|
@ -51,14 +51,12 @@ struct mei_cfg {
|
|||
*
|
||||
* @cfg: per device generation config and ops
|
||||
* @mem_addr: io memory address
|
||||
* @intr_source: interrupt source
|
||||
* @pg_state: power gating state
|
||||
* @d0i3_supported: di03 support
|
||||
*/
|
||||
struct mei_me_hw {
|
||||
const struct mei_cfg *cfg;
|
||||
void __iomem *mem_addr;
|
||||
u32 intr_source;
|
||||
enum mei_pg_state pg_state;
|
||||
bool d0i3_supported;
|
||||
};
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
#include <linux/ktime.h>
|
||||
#include <linux/delay.h>
|
||||
#include <linux/kthread.h>
|
||||
#include <linux/irqreturn.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/pm_runtime.h>
|
||||
|
||||
#include <linux/mei.h>
|
||||
|
@ -440,6 +440,18 @@ static void mei_txe_intr_enable(struct mei_device *dev)
|
|||
mei_txe_br_reg_write(hw, HIER_REG, HIER_INT_EN_MSK);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_txe_synchronize_irq - wait for pending IRQ handlers
|
||||
*
|
||||
* @dev: the device structure
|
||||
*/
|
||||
static void mei_txe_synchronize_irq(struct mei_device *dev)
|
||||
{
|
||||
struct pci_dev *pdev = to_pci_dev(dev->dev);
|
||||
|
||||
synchronize_irq(pdev->irq);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_txe_pending_interrupts - check if there are pending interrupts
|
||||
* only Aliveness, Input ready, and output doorbell are of relevance
|
||||
|
@ -691,7 +703,8 @@ static void mei_txe_hw_config(struct mei_device *dev)
|
|||
*/
|
||||
|
||||
static int mei_txe_write(struct mei_device *dev,
|
||||
struct mei_msg_hdr *header, unsigned char *buf)
|
||||
struct mei_msg_hdr *header,
|
||||
const unsigned char *buf)
|
||||
{
|
||||
struct mei_txe_hw *hw = to_txe_hw(dev);
|
||||
unsigned long rem;
|
||||
|
@ -1167,6 +1180,7 @@ static const struct mei_hw_ops mei_txe_hw_ops = {
|
|||
.intr_clear = mei_txe_intr_clear,
|
||||
.intr_enable = mei_txe_intr_enable,
|
||||
.intr_disable = mei_txe_intr_disable,
|
||||
.synchronize_irq = mei_txe_synchronize_irq,
|
||||
|
||||
.hbuf_free_slots = mei_txe_hbuf_empty_slots,
|
||||
.hbuf_is_ready = mei_txe_is_input_ready,
|
||||
|
|
|
@ -122,6 +122,10 @@ int mei_reset(struct mei_device *dev)
|
|||
mei_dev_state_str(state), fw_sts_str);
|
||||
}
|
||||
|
||||
mei_clear_interrupts(dev);
|
||||
|
||||
mei_synchronize_irq(dev);
|
||||
|
||||
/* we're already in reset, cancel the init timer
|
||||
* if the reset was called due the hbm protocol error
|
||||
* we need to call it before hw start
|
||||
|
@ -273,8 +277,6 @@ int mei_restart(struct mei_device *dev)
|
|||
|
||||
mutex_lock(&dev->device_lock);
|
||||
|
||||
mei_clear_interrupts(dev);
|
||||
|
||||
dev->dev_state = MEI_DEV_POWER_UP;
|
||||
dev->reset_count = 0;
|
||||
|
||||
|
|
|
@ -118,7 +118,6 @@ int mei_cl_irq_read_msg(struct mei_cl *cl,
|
|||
|
||||
if (!mei_cl_is_connected(cl)) {
|
||||
cl_dbg(dev, cl, "not connected\n");
|
||||
list_move_tail(&cb->list, &complete_list->list);
|
||||
cb->status = -ENODEV;
|
||||
goto discard;
|
||||
}
|
||||
|
@ -128,8 +127,6 @@ int mei_cl_irq_read_msg(struct mei_cl *cl,
|
|||
if (buf_sz < cb->buf_idx) {
|
||||
cl_err(dev, cl, "message is too big len %d idx %zu\n",
|
||||
mei_hdr->length, cb->buf_idx);
|
||||
|
||||
list_move_tail(&cb->list, &complete_list->list);
|
||||
cb->status = -EMSGSIZE;
|
||||
goto discard;
|
||||
}
|
||||
|
@ -137,8 +134,6 @@ int mei_cl_irq_read_msg(struct mei_cl *cl,
|
|||
if (cb->buf.size < buf_sz) {
|
||||
cl_dbg(dev, cl, "message overflow. size %zu len %d idx %zu\n",
|
||||
cb->buf.size, mei_hdr->length, cb->buf_idx);
|
||||
|
||||
list_move_tail(&cb->list, &complete_list->list);
|
||||
cb->status = -EMSGSIZE;
|
||||
goto discard;
|
||||
}
|
||||
|
@ -158,6 +153,8 @@ int mei_cl_irq_read_msg(struct mei_cl *cl,
|
|||
return 0;
|
||||
|
||||
discard:
|
||||
if (cb)
|
||||
list_move_tail(&cb->list, &complete_list->list);
|
||||
mei_irq_discard_msg(dev, mei_hdr);
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@ -322,7 +322,7 @@ static ssize_t mei_write(struct file *file, const char __user *ubuf,
|
|||
goto out;
|
||||
}
|
||||
|
||||
rets = mei_cl_write(cl, cb, false);
|
||||
rets = mei_cl_write(cl, cb);
|
||||
out:
|
||||
mutex_unlock(&dev->device_lock);
|
||||
return rets;
|
||||
|
@ -653,7 +653,7 @@ static int mei_fasync(int fd, struct file *file, int band)
|
|||
}
|
||||
|
||||
/**
|
||||
* fw_status_show - mei device attribute show method
|
||||
* fw_status_show - mei device fw_status attribute show method
|
||||
*
|
||||
* @device: device pointer
|
||||
* @attr: attribute pointer
|
||||
|
@ -684,8 +684,49 @@ static ssize_t fw_status_show(struct device *device,
|
|||
}
|
||||
static DEVICE_ATTR_RO(fw_status);
|
||||
|
||||
/**
|
||||
* hbm_ver_show - display HBM protocol version negotiated with FW
|
||||
*
|
||||
* @device: device pointer
|
||||
* @attr: attribute pointer
|
||||
* @buf: char out buffer
|
||||
*
|
||||
* Return: number of the bytes printed into buf or error
|
||||
*/
|
||||
static ssize_t hbm_ver_show(struct device *device,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
struct mei_device *dev = dev_get_drvdata(device);
|
||||
struct hbm_version ver;
|
||||
|
||||
mutex_lock(&dev->device_lock);
|
||||
ver = dev->version;
|
||||
mutex_unlock(&dev->device_lock);
|
||||
|
||||
return sprintf(buf, "%u.%u\n", ver.major_version, ver.minor_version);
|
||||
}
|
||||
static DEVICE_ATTR_RO(hbm_ver);
|
||||
|
||||
/**
|
||||
* hbm_ver_drv_show - display HBM protocol version advertised by driver
|
||||
*
|
||||
* @device: device pointer
|
||||
* @attr: attribute pointer
|
||||
* @buf: char out buffer
|
||||
*
|
||||
* Return: number of the bytes printed into buf or error
|
||||
*/
|
||||
static ssize_t hbm_ver_drv_show(struct device *device,
|
||||
struct device_attribute *attr, char *buf)
|
||||
{
|
||||
return sprintf(buf, "%u.%u\n", HBM_MAJOR_VERSION, HBM_MINOR_VERSION);
|
||||
}
|
||||
static DEVICE_ATTR_RO(hbm_ver_drv);
|
||||
|
||||
static struct attribute *mei_attrs[] = {
|
||||
&dev_attr_fw_status.attr,
|
||||
&dev_attr_hbm_ver.attr,
|
||||
&dev_attr_hbm_ver_drv.attr,
|
||||
NULL
|
||||
};
|
||||
ATTRIBUTE_GROUPS(mei);
|
||||
|
|
|
@ -55,7 +55,8 @@ extern const uuid_le mei_amthif_guid;
|
|||
|
||||
/* File state */
|
||||
enum file_state {
|
||||
MEI_FILE_INITIALIZING = 0,
|
||||
MEI_FILE_UNINITIALIZED = 0,
|
||||
MEI_FILE_INITIALIZING,
|
||||
MEI_FILE_CONNECTING,
|
||||
MEI_FILE_CONNECTED,
|
||||
MEI_FILE_DISCONNECTING,
|
||||
|
@ -109,6 +110,21 @@ enum mei_cb_file_ops {
|
|||
MEI_FOP_NOTIFY_STOP,
|
||||
};
|
||||
|
||||
/**
|
||||
* enum mei_cl_io_mode - io mode between driver and fw
|
||||
*
|
||||
* @MEI_CL_IO_TX_BLOCKING: send is blocking
|
||||
* @MEI_CL_IO_TX_INTERNAL: internal communication between driver and FW
|
||||
*
|
||||
* @MEI_CL_IO_RX_NONBLOCK: recv is non-blocking
|
||||
*/
|
||||
enum mei_cl_io_mode {
|
||||
MEI_CL_IO_TX_BLOCKING = BIT(0),
|
||||
MEI_CL_IO_TX_INTERNAL = BIT(1),
|
||||
|
||||
MEI_CL_IO_RX_NONBLOCK = BIT(2),
|
||||
};
|
||||
|
||||
/*
|
||||
* Intel MEI message data struct
|
||||
*/
|
||||
|
@ -169,6 +185,7 @@ struct mei_cl;
|
|||
* @fp: pointer to file structure
|
||||
* @status: io status of the cb
|
||||
* @internal: communication between driver and FW flag
|
||||
* @blocking: transmission blocking mode
|
||||
* @completed: the transfer or reception has completed
|
||||
*/
|
||||
struct mei_cl_cb {
|
||||
|
@ -180,6 +197,7 @@ struct mei_cl_cb {
|
|||
const struct file *fp;
|
||||
int status;
|
||||
u32 internal:1;
|
||||
u32 blocking:1;
|
||||
u32 completed:1;
|
||||
};
|
||||
|
||||
|
@ -253,6 +271,7 @@ struct mei_cl {
|
|||
* @intr_clear : clear pending interrupts
|
||||
* @intr_enable : enable interrupts
|
||||
* @intr_disable : disable interrupts
|
||||
* @synchronize_irq : synchronize irqs
|
||||
*
|
||||
* @hbuf_free_slots : query for write buffer empty slots
|
||||
* @hbuf_is_ready : query if write buffer is empty
|
||||
|
@ -274,7 +293,6 @@ struct mei_hw_ops {
|
|||
int (*hw_start)(struct mei_device *dev);
|
||||
void (*hw_config)(struct mei_device *dev);
|
||||
|
||||
|
||||
int (*fw_status)(struct mei_device *dev, struct mei_fw_status *fw_sts);
|
||||
enum mei_pg_state (*pg_state)(struct mei_device *dev);
|
||||
bool (*pg_in_transition)(struct mei_device *dev);
|
||||
|
@ -283,14 +301,14 @@ struct mei_hw_ops {
|
|||
void (*intr_clear)(struct mei_device *dev);
|
||||
void (*intr_enable)(struct mei_device *dev);
|
||||
void (*intr_disable)(struct mei_device *dev);
|
||||
void (*synchronize_irq)(struct mei_device *dev);
|
||||
|
||||
int (*hbuf_free_slots)(struct mei_device *dev);
|
||||
bool (*hbuf_is_ready)(struct mei_device *dev);
|
||||
size_t (*hbuf_max_len)(const struct mei_device *dev);
|
||||
|
||||
int (*write)(struct mei_device *dev,
|
||||
struct mei_msg_hdr *hdr,
|
||||
unsigned char *buf);
|
||||
const unsigned char *buf);
|
||||
|
||||
int (*rdbuf_full_slots)(struct mei_device *dev);
|
||||
|
||||
|
@ -304,8 +322,9 @@ void mei_cl_bus_rescan(struct mei_device *bus);
|
|||
void mei_cl_bus_rescan_work(struct work_struct *work);
|
||||
void mei_cl_bus_dev_fixup(struct mei_cl_device *dev);
|
||||
ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length,
|
||||
bool blocking);
|
||||
ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length);
|
||||
unsigned int mode);
|
||||
ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length,
|
||||
unsigned int mode);
|
||||
bool mei_cl_bus_rx_event(struct mei_cl *cl);
|
||||
bool mei_cl_bus_notify_event(struct mei_cl *cl);
|
||||
void mei_cl_bus_remove_devices(struct mei_device *bus);
|
||||
|
@ -627,6 +646,11 @@ static inline void mei_disable_interrupts(struct mei_device *dev)
|
|||
dev->ops->intr_disable(dev);
|
||||
}
|
||||
|
||||
static inline void mei_synchronize_irq(struct mei_device *dev)
|
||||
{
|
||||
dev->ops->synchronize_irq(dev);
|
||||
}
|
||||
|
||||
static inline bool mei_host_is_ready(struct mei_device *dev)
|
||||
{
|
||||
return dev->ops->host_is_ready(dev);
|
||||
|
@ -652,7 +676,7 @@ static inline size_t mei_hbuf_max_len(const struct mei_device *dev)
|
|||
}
|
||||
|
||||
static inline int mei_write_message(struct mei_device *dev,
|
||||
struct mei_msg_hdr *hdr, void *buf)
|
||||
struct mei_msg_hdr *hdr, const void *buf)
|
||||
{
|
||||
return dev->ops->write(dev, hdr, buf);
|
||||
}
|
||||
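To make the mode plumbing introduced in this header concrete: the in-tree user is the mei_osver() fixup earlier in this series, and the hedged sketch below shows the same pattern in isolation — a blocking, driver-internal transmit followed by a non-blocking receive. The example_query() name and buffer handling are illustrative only; the call signatures are the ones added above for __mei_cl_send() and __mei_cl_recv().

```c
/* Illustrative only: combines the mei_cl_io_mode flags added in this series. */
static int example_query(struct mei_cl_device *cldev,
			 u8 *req, size_t req_len,
			 u8 *resp, size_t resp_len)
{
	unsigned int tx_mode = MEI_CL_IO_TX_BLOCKING | MEI_CL_IO_TX_INTERNAL;
	ssize_t ret;

	/* Blocking, driver-internal transmit (not attributed to user space). */
	ret = __mei_cl_send(cldev->cl, req, req_len, tx_mode);
	if (ret < 0)
		return ret;

	/* Non-blocking receive: returns -EAGAIN instead of sleeping. */
	ret = __mei_cl_recv(cldev->cl, resp, resp_len, MEI_CL_IO_RX_NONBLOCK);
	if (ret < 0)
		return ret;

	return 0;
}
```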
|
|
|
@ -87,6 +87,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
|
|||
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, mei_me_pch8_cfg)},
|
||||
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, mei_me_pch8_sps_cfg)},
|
||||
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, mei_me_pch8_sps_cfg)},
|
||||
{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, mei_me_pch8_cfg)},
|
||||
|
||||
{MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, mei_me_pch8_cfg)},
|
||||
{MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, mei_me_pch8_cfg)},
|
||||
|
|
|
@ -297,35 +297,34 @@ static int mei_nfc_recv(struct nfc_mei_phy *phy, u8 *buf, size_t length)
|
|||
}
|
||||
|
||||
|
||||
static void nfc_mei_event_cb(struct mei_cl_device *cldev, u32 events,
|
||||
void *context)
|
||||
static void nfc_mei_rx_cb(struct mei_cl_device *cldev)
|
||||
{
|
||||
struct nfc_mei_phy *phy = context;
|
||||
struct nfc_mei_phy *phy = mei_cldev_get_drvdata(cldev);
|
||||
struct sk_buff *skb;
|
||||
int reply_size;
|
||||
|
||||
if (!phy)
|
||||
return;
|
||||
|
||||
if (phy->hard_fault != 0)
|
||||
return;
|
||||
|
||||
if (events & BIT(MEI_CL_EVENT_RX)) {
|
||||
struct sk_buff *skb;
|
||||
int reply_size;
|
||||
skb = alloc_skb(MEI_NFC_MAX_READ, GFP_KERNEL);
|
||||
if (!skb)
|
||||
return;
|
||||
|
||||
skb = alloc_skb(MEI_NFC_MAX_READ, GFP_KERNEL);
|
||||
if (!skb)
|
||||
return;
|
||||
|
||||
reply_size = mei_nfc_recv(phy, skb->data, MEI_NFC_MAX_READ);
|
||||
if (reply_size < MEI_NFC_HEADER_SIZE) {
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
skb_put(skb, reply_size);
|
||||
skb_pull(skb, MEI_NFC_HEADER_SIZE);
|
||||
|
||||
MEI_DUMP_SKB_IN("mei frame read", skb);
|
||||
|
||||
nfc_hci_recv_frame(phy->hdev, skb);
|
||||
reply_size = mei_nfc_recv(phy, skb->data, MEI_NFC_MAX_READ);
|
||||
if (reply_size < MEI_NFC_HEADER_SIZE) {
|
||||
kfree_skb(skb);
|
||||
return;
|
||||
}
|
||||
|
||||
skb_put(skb, reply_size);
|
||||
skb_pull(skb, MEI_NFC_HEADER_SIZE);
|
||||
|
||||
MEI_DUMP_SKB_IN("mei frame read", skb);
|
||||
|
||||
nfc_hci_recv_frame(phy->hdev, skb);
|
||||
}
|
||||
|
||||
static int nfc_mei_phy_enable(void *phy_id)
|
||||
|
@ -356,8 +355,7 @@ static int nfc_mei_phy_enable(void *phy_id)
|
|||
goto err;
|
||||
}
|
||||
|
||||
r = mei_cldev_register_event_cb(phy->cldev, BIT(MEI_CL_EVENT_RX),
|
||||
nfc_mei_event_cb, phy);
|
||||
r = mei_cldev_register_rx_cb(phy->cldev, nfc_mei_rx_cb);
|
||||
if (r) {
|
||||
pr_err("Event cb registration failed %d\n", r);
|
||||
goto err;
|
||||
|
|
|
@ -82,28 +82,7 @@ static struct mei_cl_driver microread_driver = {
|
|||
.remove = microread_mei_remove,
|
||||
};
|
||||
|
||||
static int microread_mei_init(void)
|
||||
{
|
||||
int r;
|
||||
|
||||
pr_debug(DRIVER_DESC ": %s\n", __func__);
|
||||
|
||||
r = mei_cldev_driver_register(µread_driver);
|
||||
if (r) {
|
||||
pr_err(MICROREAD_DRIVER_NAME ": driver registration failed\n");
|
||||
return r;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void microread_mei_exit(void)
|
||||
{
|
||||
mei_cldev_driver_unregister(µread_driver);
|
||||
}
|
||||
|
||||
module_init(microread_mei_init);
|
||||
module_exit(microread_mei_exit);
|
||||
module_mei_cl_driver(microread_driver);
|
||||
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_DESCRIPTION(DRIVER_DESC);
|
||||
|
|
|
@ -82,28 +82,7 @@ static struct mei_cl_driver pn544_driver = {
|
|||
.remove = pn544_mei_remove,
|
||||
};
|
||||
|
||||
static int pn544_mei_init(void)
|
||||
{
|
||||
int r;
|
||||
|
||||
pr_debug(DRIVER_DESC ": %s\n", __func__);
|
||||
|
||||
r = mei_cldev_driver_register(&pn544_driver);
|
||||
if (r) {
|
||||
pr_err(PN544_DRIVER_NAME ": driver registration failed\n");
|
||||
return r;
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void pn544_mei_exit(void)
|
||||
{
|
||||
mei_cldev_driver_unregister(&pn544_driver);
|
||||
}
|
||||
|
||||
module_init(pn544_mei_init);
|
||||
module_exit(pn544_mei_exit);
|
||||
module_mei_cl_driver(pn544_driver);
|
||||
|
||||
MODULE_LICENSE("GPL");
|
||||
MODULE_DESCRIPTION(DRIVER_DESC);
|
||||
|
|
|
@ -35,6 +35,16 @@ config NVMEM_LPC18XX_EEPROM
|
|||
To compile this driver as a module, choose M here: the module
|
||||
will be called nvmem_lpc18xx_eeprom.
|
||||
|
||||
config NVMEM_LPC18XX_OTP
|
||||
tristate "NXP LPC18XX OTP Memory Support"
|
||||
depends on ARCH_LPC18XX || COMPILE_TEST
|
||||
depends on HAS_IOMEM
|
||||
help
|
||||
Say Y here to include support for NXP LPC18xx OTP memory found on
|
||||
all LPC18xx and LPC43xx devices.
|
||||
To compile this driver as a module, choose M here: the module
|
||||
will be called nvmem_lpc18xx_otp.
|
||||
|
||||
config NVMEM_MXS_OCOTP
|
||||
tristate "Freescale MXS On-Chip OTP Memory Support"
|
||||
depends on ARCH_MXS || COMPILE_TEST
|
||||
|
@ -80,6 +90,18 @@ config ROCKCHIP_EFUSE
|
|||
This driver can also be built as a module. If so, the module
|
||||
will be called nvmem_rockchip_efuse.
|
||||
|
||||
config NVMEM_BCM_OCOTP
|
||||
tristate "Broadcom On-Chip OTP Controller support"
|
||||
depends on ARCH_BCM_IPROC || COMPILE_TEST
|
||||
depends on HAS_IOMEM
|
||||
default ARCH_BCM_IPROC
|
||||
help
|
||||
Say y here to enable read/write access to the Broadcom OTP
|
||||
controller.
|
||||
|
||||
This driver can also be built as a module. If so, the module
|
||||
will be called nvmem-bcm-ocotp.
|
||||
|
||||
config NVMEM_SUNXI_SID
|
||||
tristate "Allwinner SoCs SID support"
|
||||
depends on ARCH_SUNXI
|
||||
|
|
|
@ -6,10 +6,14 @@ obj-$(CONFIG_NVMEM) += nvmem_core.o
|
|||
nvmem_core-y := core.o
|
||||
|
||||
# Devices
|
||||
obj-$(CONFIG_NVMEM_BCM_OCOTP) += nvmem-bcm-ocotp.o
|
||||
nvmem-bcm-ocotp-y := bcm-ocotp.o
|
||||
obj-$(CONFIG_NVMEM_IMX_OCOTP) += nvmem-imx-ocotp.o
|
||||
nvmem-imx-ocotp-y := imx-ocotp.o
|
||||
obj-$(CONFIG_NVMEM_LPC18XX_EEPROM) += nvmem_lpc18xx_eeprom.o
|
||||
nvmem_lpc18xx_eeprom-y := lpc18xx_eeprom.o
|
||||
obj-$(CONFIG_NVMEM_LPC18XX_OTP) += nvmem_lpc18xx_otp.o
|
||||
nvmem_lpc18xx_otp-y := lpc18xx_otp.o
|
||||
obj-$(CONFIG_NVMEM_MXS_OCOTP) += nvmem-mxs-ocotp.o
|
||||
nvmem-mxs-ocotp-y := mxs-ocotp.o
|
||||
obj-$(CONFIG_MTK_EFUSE) += nvmem_mtk-efuse.o
|
||||
|
|
|
@ -0,0 +1,335 @@
|
|||
/*
|
||||
* Copyright (C) 2016 Broadcom
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or
|
||||
* modify it under the terms of the GNU General Public License as
|
||||
* published by the Free Software Foundation version 2.
|
||||
*
|
||||
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
|
||||
* kind, whether express or implied; without even the implied warranty
|
||||
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
* GNU General Public License for more details.
|
||||
*/
|
||||
|
||||
#include <linux/delay.h>
|
||||
#include <linux/device.h>
|
||||
#include <linux/io.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/nvmem-provider.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_address.h>
|
||||
#include <linux/platform_device.h>
|
||||
|
||||
/*
|
||||
* # of tries for OTP Status. The time to execute a command varies. The slowest
|
||||
* commands are writes which also vary based on the # of bits turned on. Writing
|
||||
* 0xffffffff takes ~3800 us.
|
||||
*/
|
||||
#define OTPC_RETRIES 5000
|
||||
|
||||
/* Sequence to enable OTP program */
|
||||
#define OTPC_PROG_EN_SEQ { 0xf, 0x4, 0x8, 0xd }
|
||||
|
||||
/* OTPC Commands */
|
||||
#define OTPC_CMD_READ 0x0
|
||||
#define OTPC_CMD_OTP_PROG_ENABLE 0x2
|
||||
#define OTPC_CMD_OTP_PROG_DISABLE 0x3
|
||||
#define OTPC_CMD_PROGRAM 0xA
|
||||
|
||||
/* OTPC Status Bits */
|
||||
#define OTPC_STAT_CMD_DONE BIT(1)
|
||||
#define OTPC_STAT_PROG_OK BIT(2)
|
||||
|
||||
/* OTPC register definition */
|
||||
#define OTPC_MODE_REG_OFFSET 0x0
|
||||
#define OTPC_MODE_REG_OTPC_MODE 0
|
||||
#define OTPC_COMMAND_OFFSET 0x4
|
||||
#define OTPC_COMMAND_COMMAND_WIDTH 6
|
||||
#define OTPC_CMD_START_OFFSET 0x8
|
||||
#define OTPC_CMD_START_START 0
|
||||
#define OTPC_CPU_STATUS_OFFSET 0xc
|
||||
#define OTPC_CPUADDR_REG_OFFSET 0x28
|
||||
#define OTPC_CPUADDR_REG_OTPC_CPU_ADDRESS_WIDTH 16
|
||||
#define OTPC_CPU_WRITE_REG_OFFSET 0x2c
|
||||
|
||||
#define OTPC_CMD_MASK (BIT(OTPC_COMMAND_COMMAND_WIDTH) - 1)
|
||||
#define OTPC_ADDR_MASK (BIT(OTPC_CPUADDR_REG_OTPC_CPU_ADDRESS_WIDTH) - 1)
|
||||
|
||||
|
||||
struct otpc_map {
|
||||
/* in words. */
|
||||
u32 otpc_row_size;
|
||||
/* 128 bit row / 4 words support. */
|
||||
u16 data_r_offset[4];
|
||||
/* 128 bit row / 4 words support. */
|
||||
u16 data_w_offset[4];
|
||||
};
|
||||
|
||||
static struct otpc_map otp_map = {
|
||||
.otpc_row_size = 1,
|
||||
.data_r_offset = {0x10},
|
||||
.data_w_offset = {0x2c},
|
||||
};
|
||||
|
||||
static struct otpc_map otp_map_v2 = {
|
||||
.otpc_row_size = 2,
|
||||
.data_r_offset = {0x10, 0x5c},
|
||||
.data_w_offset = {0x2c, 0x64},
|
||||
};
|
||||
|
||||
struct otpc_priv {
|
||||
struct device *dev;
|
||||
void __iomem *base;
|
||||
struct otpc_map *map;
|
||||
struct nvmem_config *config;
|
||||
};
|
||||
|
||||
static inline void set_command(void __iomem *base, u32 command)
|
||||
{
|
||||
writel(command & OTPC_CMD_MASK, base + OTPC_COMMAND_OFFSET);
|
||||
}
|
||||
|
||||
static inline void set_cpu_address(void __iomem *base, u32 addr)
|
||||
{
|
||||
writel(addr & OTPC_ADDR_MASK, base + OTPC_CPUADDR_REG_OFFSET);
|
||||
}
|
||||
|
||||
static inline void set_start_bit(void __iomem *base)
|
||||
{
|
||||
writel(1 << OTPC_CMD_START_START, base + OTPC_CMD_START_OFFSET);
|
||||
}
|
||||
|
||||
static inline void reset_start_bit(void __iomem *base)
|
||||
{
|
||||
writel(0, base + OTPC_CMD_START_OFFSET);
|
||||
}
|
||||
|
||||
static inline void write_cpu_data(void __iomem *base, u32 value)
|
||||
{
|
||||
writel(value, base + OTPC_CPU_WRITE_REG_OFFSET);
|
||||
}
|
||||
|
||||
static int poll_cpu_status(void __iomem *base, u32 value)
|
||||
{
|
||||
u32 status;
|
||||
u32 retries;
|
||||
|
||||
for (retries = 0; retries < OTPC_RETRIES; retries++) {
|
||||
status = readl(base + OTPC_CPU_STATUS_OFFSET);
|
||||
if (status & value)
|
||||
break;
|
||||
udelay(1);
|
||||
}
|
||||
if (retries == OTPC_RETRIES)
|
||||
return -EAGAIN;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int enable_ocotp_program(void __iomem *base)
|
||||
{
|
||||
static const u32 vals[] = OTPC_PROG_EN_SEQ;
|
||||
int i;
|
||||
int ret;
|
||||
|
||||
/* Write the magic sequence to enable programming */
|
||||
set_command(base, OTPC_CMD_OTP_PROG_ENABLE);
|
||||
for (i = 0; i < ARRAY_SIZE(vals); i++) {
|
||||
write_cpu_data(base, vals[i]);
|
||||
set_start_bit(base);
|
||||
ret = poll_cpu_status(base, OTPC_STAT_CMD_DONE);
|
||||
reset_start_bit(base);
|
||||
if (ret)
|
||||
return ret;
|
||||
}
|
||||
|
||||
return poll_cpu_status(base, OTPC_STAT_PROG_OK);
|
||||
}
|
||||
|
||||
static int disable_ocotp_program(void __iomem *base)
|
||||
{
|
||||
int ret;
|
||||
|
||||
set_command(base, OTPC_CMD_OTP_PROG_DISABLE);
|
||||
set_start_bit(base);
|
||||
ret = poll_cpu_status(base, OTPC_STAT_PROG_OK);
|
||||
reset_start_bit(base);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int bcm_otpc_read(void *context, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct otpc_priv *priv = context;
|
||||
u32 *buf = val;
|
||||
u32 bytes_read;
|
||||
u32 address = offset / priv->config->word_size;
|
||||
int i, ret;
|
||||
|
||||
for (bytes_read = 0; bytes_read < bytes;) {
|
||||
set_command(priv->base, OTPC_CMD_READ);
|
||||
set_cpu_address(priv->base, address++);
|
||||
set_start_bit(priv->base);
|
||||
ret = poll_cpu_status(priv->base, OTPC_STAT_CMD_DONE);
|
||||
if (ret) {
|
||||
dev_err(priv->dev, "otp read error: 0x%x", ret);
|
||||
return -EIO;
|
||||
}
|
||||
|
||||
for (i = 0; i < priv->map->otpc_row_size; i++) {
|
||||
*buf++ = readl(priv->base +
|
||||
priv->map->data_r_offset[i]);
|
||||
bytes_read += sizeof(*buf);
|
||||
}
|
||||
|
||||
reset_start_bit(priv->base);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bcm_otpc_write(void *context, unsigned int offset, void *val,
|
||||
size_t bytes)
|
||||
{
|
||||
struct otpc_priv *priv = context;
|
||||
u32 *buf = val;
|
||||
u32 bytes_written;
|
||||
u32 address = offset / priv->config->word_size;
|
||||
int i, ret;
|
||||
|
||||
if (offset % priv->config->word_size)
|
||||
return -EINVAL;
|
||||
|
||||
ret = enable_ocotp_program(priv->base);
|
||||
if (ret)
|
||||
return -EIO;
|
||||
|
||||
for (bytes_written = 0; bytes_written < bytes;) {
|
||||
set_command(priv->base, OTPC_CMD_PROGRAM);
|
||||
set_cpu_address(priv->base, address++);
|
||||
for (i = 0; i < priv->map->otpc_row_size; i++) {
|
||||
writel(*buf, priv->base + priv->map->data_r_offset[i]);
|
||||
buf++;
|
||||
bytes_written += sizeof(*buf);
|
||||
}
|
||||
set_start_bit(priv->base);
|
||||
ret = poll_cpu_status(priv->base, OTPC_STAT_CMD_DONE);
|
||||
reset_start_bit(priv->base);
|
||||
if (ret) {
|
||||
dev_err(priv->dev, "otp write error: 0x%x", ret);
|
||||
return -EIO;
|
||||
}
|
||||
}
|
||||
|
||||
disable_ocotp_program(priv->base);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct nvmem_config bcm_otpc_nvmem_config = {
|
||||
.name = "bcm-ocotp",
|
||||
.read_only = false,
|
||||
.word_size = 4,
|
||||
.stride = 4,
|
||||
.owner = THIS_MODULE,
|
||||
.reg_read = bcm_otpc_read,
|
||||
.reg_write = bcm_otpc_write,
|
||||
};
|
||||
|
||||
static const struct of_device_id bcm_otpc_dt_ids[] = {
|
||||
{ .compatible = "brcm,ocotp" },
|
||||
{ .compatible = "brcm,ocotp-v2" },
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, bcm_otpc_dt_ids);
|
||||
|
||||
static int bcm_otpc_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct device *dev = &pdev->dev;
|
||||
struct device_node *dn = dev->of_node;
|
||||
struct resource *res;
|
||||
struct otpc_priv *priv;
|
||||
struct nvmem_device *nvmem;
|
||||
int err;
|
||||
u32 num_words;
|
||||
|
||||
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
|
||||
if (!priv)
|
||||
return -ENOMEM;
|
||||
|
||||
if (of_device_is_compatible(dev->of_node, "brcm,ocotp"))
|
||||
priv->map = &otp_map;
|
||||
else if (of_device_is_compatible(dev->of_node, "brcm,ocotp-v2"))
|
||||
priv->map = &otp_map_v2;
|
||||
else {
|
||||
dev_err(&pdev->dev,
|
||||
"%s otpc config map not defined\n", __func__);
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Get OTP base address register. */
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
priv->base = devm_ioremap_resource(dev, res);
|
||||
if (IS_ERR(priv->base)) {
|
||||
dev_err(dev, "unable to map I/O memory\n");
|
||||
return PTR_ERR(priv->base);
|
||||
}
|
||||
|
||||
/* Enable CPU access to OTPC. */
|
||||
writel(readl(priv->base + OTPC_MODE_REG_OFFSET) |
|
||||
BIT(OTPC_MODE_REG_OTPC_MODE),
|
||||
priv->base + OTPC_MODE_REG_OFFSET);
|
||||
reset_start_bit(priv->base);
|
||||
|
||||
/* Read size of memory in words. */
|
||||
err = of_property_read_u32(dn, "brcm,ocotp-size", &num_words);
|
||||
if (err) {
|
||||
dev_err(dev, "size parameter not specified\n");
|
||||
return -EINVAL;
|
||||
} else if (num_words == 0) {
|
||||
dev_err(dev, "size must be > 0\n");
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
bcm_otpc_nvmem_config.size = 4 * num_words;
|
||||
bcm_otpc_nvmem_config.dev = dev;
|
||||
bcm_otpc_nvmem_config.priv = priv;
|
||||
|
||||
if (of_device_is_compatible(dev->of_node, "brcm,ocotp-v2")) {
|
||||
bcm_otpc_nvmem_config.word_size = 8;
|
||||
bcm_otpc_nvmem_config.stride = 8;
|
||||
}
|
||||
|
||||
priv->config = &bcm_otpc_nvmem_config;
|
||||
|
||||
nvmem = nvmem_register(&bcm_otpc_nvmem_config);
|
||||
if (IS_ERR(nvmem)) {
|
||||
dev_err(dev, "error registering nvmem config\n");
|
||||
return PTR_ERR(nvmem);
|
||||
}
|
||||
|
||||
platform_set_drvdata(pdev, nvmem);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bcm_otpc_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct nvmem_device *nvmem = platform_get_drvdata(pdev);
|
||||
|
||||
return nvmem_unregister(nvmem);
|
||||
}
|
||||
|
||||
static struct platform_driver bcm_otpc_driver = {
|
||||
.probe = bcm_otpc_probe,
|
||||
.remove = bcm_otpc_remove,
|
||||
.driver = {
|
||||
.name = "brcm-otpc",
|
||||
.of_match_table = bcm_otpc_dt_ids,
|
||||
},
|
||||
};
|
||||
module_platform_driver(bcm_otpc_driver);
|
||||
|
||||
MODULE_DESCRIPTION("Broadcom OTPC driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -0,0 +1,124 @@
|
|||
/*
|
||||
* NXP LPC18xx/43xx OTP memory NVMEM driver
|
||||
*
|
||||
* Copyright (c) 2016 Joachim Eastwood <manabian@gmail.com>
|
||||
*
|
||||
* Based on the imx ocotp driver,
|
||||
* Copyright (c) 2015 Pengutronix, Philipp Zabel <p.zabel@pengutronix.de>
|
||||
*
|
||||
* This program is free software; you can redistribute it and/or modify
|
||||
* it under the terms of the GNU General Public License version 2
|
||||
* as published by the Free Software Foundation.
|
||||
*
|
||||
* TODO: add support for writing OTP register via API in boot ROM.
|
||||
*/
|
||||
|
||||
#include <linux/io.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/nvmem-provider.h>
|
||||
#include <linux/of.h>
|
||||
#include <linux/of_device.h>
|
||||
#include <linux/platform_device.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
/*
 * LPC18xx OTP memory contains 4 banks with 4 32-bit words. Bank 0 starts
 * at offset 0 from the base.
 *
 * Bank 0 contains the part ID for Flashless devices and is reserved for
 * devices with Flash.
 * Bank 1/2 is general purpose or AES key storage for secure devices.
 * Bank 3 contains control data, USB ID and general purpose words.
 */
|
||||
#define LPC18XX_OTP_NUM_BANKS 4
|
||||
#define LPC18XX_OTP_WORDS_PER_BANK 4
|
||||
#define LPC18XX_OTP_WORD_SIZE sizeof(u32)
|
||||
#define LPC18XX_OTP_SIZE (LPC18XX_OTP_NUM_BANKS * \
|
||||
LPC18XX_OTP_WORDS_PER_BANK * \
|
||||
LPC18XX_OTP_WORD_SIZE)
|
||||
|
||||
struct lpc18xx_otp {
|
||||
void __iomem *base;
|
||||
};
|
||||
|
||||
static int lpc18xx_otp_read(void *context, unsigned int offset,
|
||||
void *val, size_t bytes)
|
||||
{
|
||||
struct lpc18xx_otp *otp = context;
|
||||
unsigned int count = bytes >> 2;
|
||||
u32 index = offset >> 2;
|
||||
u32 *buf = val;
|
||||
int i;
|
||||
|
||||
if (count > (LPC18XX_OTP_SIZE - index))
|
||||
count = LPC18XX_OTP_SIZE - index;
|
||||
|
||||
for (i = index; i < (index + count); i++)
|
||||
*buf++ = readl(otp->base + i * LPC18XX_OTP_WORD_SIZE);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct nvmem_config lpc18xx_otp_nvmem_config = {
|
||||
.name = "lpc18xx-otp",
|
||||
.read_only = true,
|
||||
.word_size = LPC18XX_OTP_WORD_SIZE,
|
||||
.stride = LPC18XX_OTP_WORD_SIZE,
|
||||
.owner = THIS_MODULE,
|
||||
.reg_read = lpc18xx_otp_read,
|
||||
};
|
||||
|
||||
static int lpc18xx_otp_probe(struct platform_device *pdev)
|
||||
{
|
||||
struct nvmem_device *nvmem;
|
||||
struct lpc18xx_otp *otp;
|
||||
struct resource *res;
|
||||
|
||||
otp = devm_kzalloc(&pdev->dev, sizeof(*otp), GFP_KERNEL);
|
||||
if (!otp)
|
||||
return -ENOMEM;
|
||||
|
||||
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
otp->base = devm_ioremap_resource(&pdev->dev, res);
|
||||
if (IS_ERR(otp->base))
|
||||
return PTR_ERR(otp->base);
|
||||
|
||||
lpc18xx_otp_nvmem_config.size = LPC18XX_OTP_SIZE;
|
||||
lpc18xx_otp_nvmem_config.dev = &pdev->dev;
|
||||
lpc18xx_otp_nvmem_config.priv = otp;
|
||||
|
||||
nvmem = nvmem_register(&lpc18xx_otp_nvmem_config);
|
||||
if (IS_ERR(nvmem))
|
||||
return PTR_ERR(nvmem);
|
||||
|
||||
platform_set_drvdata(pdev, nvmem);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int lpc18xx_otp_remove(struct platform_device *pdev)
|
||||
{
|
||||
struct nvmem_device *nvmem = platform_get_drvdata(pdev);
|
||||
|
||||
return nvmem_unregister(nvmem);
|
||||
}
|
||||
|
||||
static const struct of_device_id lpc18xx_otp_dt_ids[] = {
|
||||
{ .compatible = "nxp,lpc1850-otp" },
|
||||
{ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, lpc18xx_otp_dt_ids);
|
||||
|
||||
static struct platform_driver lpc18xx_otp_driver = {
|
||||
.probe = lpc18xx_otp_probe,
|
||||
.remove = lpc18xx_otp_remove,
|
||||
.driver = {
|
||||
.name = "lpc18xx_otp",
|
||||
.of_match_table = lpc18xx_otp_dt_ids,
|
||||
},
|
||||
};
|
||||
module_platform_driver(lpc18xx_otp_driver);
|
||||
|
||||
MODULE_AUTHOR("Joachim Eastwoood <manabian@gmail.com>");
|
||||
MODULE_DESCRIPTION("NXP LPC18xx OTP NVMEM driver");
|
||||
MODULE_LICENSE("GPL v2");
|
|
@ -58,6 +58,41 @@ struct of_overlay {
|
|||
static int of_overlay_apply_one(struct of_overlay *ov,
|
||||
struct device_node *target, const struct device_node *overlay);
|
||||
|
||||
static BLOCKING_NOTIFIER_HEAD(of_overlay_chain);
|
||||
|
||||
int of_overlay_notifier_register(struct notifier_block *nb)
|
||||
{
|
||||
return blocking_notifier_chain_register(&of_overlay_chain, nb);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_overlay_notifier_register);
|
||||
|
||||
int of_overlay_notifier_unregister(struct notifier_block *nb)
|
||||
{
|
||||
return blocking_notifier_chain_unregister(&of_overlay_chain, nb);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(of_overlay_notifier_unregister);
|
||||
|
||||
static int of_overlay_notify(struct of_overlay *ov,
|
||||
enum of_overlay_notify_action action)
|
||||
{
|
||||
struct of_overlay_notify_data nd;
|
||||
int i, ret;
|
||||
|
||||
for (i = 0; i < ov->count; i++) {
|
||||
struct of_overlay_info *ovinfo = &ov->ovinfo_tab[i];
|
||||
|
||||
nd.target = ovinfo->target;
|
||||
nd.overlay = ovinfo->overlay;
|
||||
|
||||
ret = blocking_notifier_call_chain(&of_overlay_chain,
|
||||
action, &nd);
|
||||
if (ret)
|
||||
return notifier_to_errno(ret);
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int of_overlay_apply_single_property(struct of_overlay *ov,
|
||||
struct device_node *target, struct property *prop)
|
||||
{
|
||||
|
@ -368,6 +403,13 @@ int of_overlay_create(struct device_node *tree)
|
|||
goto err_free_idr;
|
||||
}
|
||||
|
||||
err = of_overlay_notify(ov, OF_OVERLAY_PRE_APPLY);
|
||||
if (err < 0) {
|
||||
pr_err("%s: Pre-apply notifier failed (err=%d)\n",
|
||||
__func__, err);
|
||||
goto err_free_idr;
|
||||
}
|
||||
|
||||
/* apply the overlay */
|
||||
err = of_overlay_apply(ov);
|
||||
if (err)
|
||||
|
@ -382,6 +424,8 @@ int of_overlay_create(struct device_node *tree)
|
|||
/* add to the tail of the overlay list */
|
||||
list_add_tail(&ov->node, &ov_list);
|
||||
|
||||
of_overlay_notify(ov, OF_OVERLAY_POST_APPLY);
|
||||
|
||||
mutex_unlock(&of_mutex);
|
||||
|
||||
return id;
|
||||
|
@ -498,9 +542,10 @@ int of_overlay_destroy(int id)
|
|||
goto out;
|
||||
}
|
||||
|
||||
|
||||
of_overlay_notify(ov, OF_OVERLAY_PRE_REMOVE);
|
||||
list_del(&ov->node);
|
||||
__of_changeset_revert(&ov->cset);
|
||||
of_overlay_notify(ov, OF_OVERLAY_POST_REMOVE);
|
||||
of_free_overlay_info(ov);
|
||||
idr_remove(&ov_idr, id);
|
||||
of_changeset_destroy(&ov->cset);
|
||||
|
|
|
@ -308,10 +308,8 @@ static ssize_t goldfish_pipe_read_write(struct file *filp, char __user *buffer,
* returns a small amount, then there's no need to pin that
* much memory to the process.
*/
down_read(&current->mm->mmap_sem);
ret = get_user_pages(address, 1, is_write ? 0 : FOLL_WRITE,
&page, NULL);
up_read(&current->mm->mmap_sem);
ret = get_user_pages_unlocked(address, 1, &page,
is_write ? 0 : FOLL_WRITE);
if (ret < 0)
break;
|
||||
|
||||
|
|
|
@ -10,7 +10,7 @@
|
|||
#include <linux/uaccess.h>
|
||||
#include <linux/miscdevice.h>
|
||||
#include <linux/gfp.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/init.h>
|
||||
#include <linux/ioctl.h>
|
||||
#include <linux/fs.h>
|
||||
#include <asm/compat.h>
|
||||
|
@ -126,4 +126,4 @@ static struct miscdevice sclp_ctl_device = {
|
|||
.name = "sclp",
|
||||
.fops = &sclp_ctl_fops,
|
||||
};
|
||||
module_misc_device(sclp_ctl_device);
|
||||
builtin_misc_device(sclp_ctl_device);
|
||||
|
|
|
@ -1,11 +1,11 @@
|
|||
/*
|
||||
* Thunderbolt Cactus Ridge driver - NHI registers
|
||||
* Thunderbolt driver - NHI registers
|
||||
*
|
||||
* Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
|
||||
*/
|
||||
|
||||
#ifndef DSL3510_REGS_H_
|
||||
#define DSL3510_REGS_H_
|
||||
#ifndef NHI_REGS_H_
|
||||
#define NHI_REGS_H_
|
||||
|
||||
#include <linux/types.h>
|
||||
|
||||
|
|
|
@ -155,4 +155,13 @@ config UIO_MF624
|
|||
|
||||
If you compile this as a module, it will be called uio_mf624.
|
||||
|
||||
config UIO_HV_GENERIC
|
||||
tristate "Generic driver for Hyper-V VMBus"
|
||||
depends on HYPERV
|
||||
help
|
||||
Generic driver that you can bind, dynamically, to any
|
||||
Hyper-V VMBus device. It is useful to provide direct access
|
||||
to network and storage devices from userspace.
|
||||
|
||||
If you compile this as a module, it will be called uio_hv_generic.
|
||||
endif
|
||||
|
|
|
@ -9,3 +9,4 @@ obj-$(CONFIG_UIO_NETX) += uio_netx.o
|
|||
obj-$(CONFIG_UIO_PRUSS) += uio_pruss.o
|
||||
obj-$(CONFIG_UIO_MF624) += uio_mf624.o
|
||||
obj-$(CONFIG_UIO_FSL_ELBC_GPCM) += uio_fsl_elbc_gpcm.o
|
||||
obj-$(CONFIG_UIO_HV_GENERIC) += uio_hv_generic.o
|
||||
|
|
|
@ -0,0 +1,218 @@
|
|||
/*
|
||||
* uio_hv_generic - generic UIO driver for VMBus
|
||||
*
|
||||
* Copyright (c) 2013-2016 Brocade Communications Systems, Inc.
|
||||
* Copyright (c) 2016, Microsoft Corporation.
|
||||
*
|
||||
*
|
||||
* This work is licensed under the terms of the GNU GPL, version 2.
|
||||
*
|
||||
* Since the driver does not declare any device ids, you must allocate
|
||||
* id and bind the device to the driver yourself. For example:
|
||||
*
|
||||
* # echo "f8615163-df3e-46c5-913f-f2d2f965ed0e" \
|
||||
* > /sys/bus/vmbus/drivers/uio_hv_generic
|
||||
* # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \
|
||||
* > /sys/bus/vmbus/drivers/hv_netvsc/unbind
|
||||
* # echo -n vmbus-ed963694-e847-4b2a-85af-bc9cfc11d6f3 \
|
||||
* > /sys/bus/vmbus/drivers/uio_hv_generic/bind
|
||||
*/
|
||||
|
||||
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
||||
|
||||
#include <linux/device.h>
|
||||
#include <linux/kernel.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/uio_driver.h>
|
||||
#include <linux/netdevice.h>
|
||||
#include <linux/if_ether.h>
|
||||
#include <linux/skbuff.h>
|
||||
#include <linux/hyperv.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/slab.h>
|
||||
|
||||
#include "../hv/hyperv_vmbus.h"
|
||||
|
||||
#define DRIVER_VERSION "0.02.0"
|
||||
#define DRIVER_AUTHOR "Stephen Hemminger <sthemmin at microsoft.com>"
|
||||
#define DRIVER_DESC "Generic UIO driver for VMBus devices"
|
||||
|
||||
/*
|
||||
* List of resources to be mapped to user space
|
||||
* can be extended up to MAX_UIO_MAPS(5) items
|
||||
*/
|
||||
enum hv_uio_map {
|
||||
TXRX_RING_MAP = 0,
|
||||
INT_PAGE_MAP,
|
||||
MON_PAGE_MAP,
|
||||
};
|
||||
|
||||
#define HV_RING_SIZE 512
|
||||
|
||||
struct hv_uio_private_data {
|
||||
struct uio_info info;
|
||||
struct hv_device *device;
|
||||
};
|
||||
|
||||
static int
|
||||
hv_uio_mmap(struct uio_info *info, struct vm_area_struct *vma)
|
||||
{
|
||||
int mi;
|
||||
|
||||
if (vma->vm_pgoff >= MAX_UIO_MAPS)
|
||||
return -EINVAL;
|
||||
|
||||
if (info->mem[vma->vm_pgoff].size == 0)
|
||||
return -EINVAL;
|
||||
|
||||
mi = (int)vma->vm_pgoff;
|
||||
|
||||
return remap_pfn_range(vma, vma->vm_start,
|
||||
info->mem[mi].addr >> PAGE_SHIFT,
|
||||
vma->vm_end - vma->vm_start, vma->vm_page_prot);
|
||||
}
|
||||
|
||||
/*
|
||||
* This is the irqcontrol callback to be registered to uio_info.
|
||||
* It can be used to disable/enable interrupt from user space processes.
|
||||
*
|
||||
* @param info
|
||||
* pointer to uio_info.
|
||||
* @param irq_state
|
||||
* state value. 1 to enable interrupt, 0 to disable interrupt.
|
||||
*/
|
||||
static int
|
||||
hv_uio_irqcontrol(struct uio_info *info, s32 irq_state)
|
||||
{
|
||||
struct hv_uio_private_data *pdata = info->priv;
|
||||
struct hv_device *dev = pdata->device;
|
||||
|
||||
dev->channel->inbound.ring_buffer->interrupt_mask = !irq_state;
|
||||
virt_mb();
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* Callback from vmbus_event when something is in inbound ring.
|
||||
*/
|
||||
static void hv_uio_channel_cb(void *context)
|
||||
{
|
||||
struct hv_uio_private_data *pdata = context;
|
||||
struct hv_device *dev = pdata->device;
|
||||
|
||||
dev->channel->inbound.ring_buffer->interrupt_mask = 1;
|
||||
virt_mb();
|
||||
|
||||
uio_event_notify(&pdata->info);
|
||||
}
|
||||
|
||||
static int
|
||||
hv_uio_probe(struct hv_device *dev,
|
||||
const struct hv_vmbus_device_id *dev_id)
|
||||
{
|
||||
struct hv_uio_private_data *pdata;
|
||||
int ret;
|
||||
|
||||
pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
|
||||
if (!pdata)
|
||||
return -ENOMEM;
|
||||
|
||||
ret = vmbus_open(dev->channel, HV_RING_SIZE * PAGE_SIZE,
|
||||
HV_RING_SIZE * PAGE_SIZE, NULL, 0,
|
||||
hv_uio_channel_cb, pdata);
|
||||
if (ret)
|
||||
goto fail;
|
||||
|
||||
dev->channel->inbound.ring_buffer->interrupt_mask = 1;
|
||||
dev->channel->batched_reading = false;
|
||||
|
||||
/* Fill general uio info */
|
||||
pdata->info.name = "uio_hv_generic";
|
||||
pdata->info.version = DRIVER_VERSION;
|
||||
pdata->info.irqcontrol = hv_uio_irqcontrol;
|
||||
pdata->info.mmap = hv_uio_mmap;
|
||||
pdata->info.irq = UIO_IRQ_CUSTOM;
|
||||
|
||||
/* mem resources */
|
||||
pdata->info.mem[TXRX_RING_MAP].name = "txrx_rings";
|
||||
pdata->info.mem[TXRX_RING_MAP].addr
|
||||
= virt_to_phys(dev->channel->ringbuffer_pages);
|
||||
pdata->info.mem[TXRX_RING_MAP].size
|
||||
= dev->channel->ringbuffer_pagecount * PAGE_SIZE;
|
||||
pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_LOGICAL;
|
||||
|
||||
pdata->info.mem[INT_PAGE_MAP].name = "int_page";
|
||||
pdata->info.mem[INT_PAGE_MAP].addr =
|
||||
virt_to_phys(vmbus_connection.int_page);
|
||||
pdata->info.mem[INT_PAGE_MAP].size = PAGE_SIZE;
|
||||
pdata->info.mem[INT_PAGE_MAP].memtype = UIO_MEM_LOGICAL;
|
||||
|
||||
pdata->info.mem[MON_PAGE_MAP].name = "monitor_pages";
|
||||
pdata->info.mem[MON_PAGE_MAP].addr =
|
||||
virt_to_phys(vmbus_connection.monitor_pages[1]);
|
||||
pdata->info.mem[MON_PAGE_MAP].size = PAGE_SIZE;
|
||||
pdata->info.mem[MON_PAGE_MAP].memtype = UIO_MEM_LOGICAL;
|
||||
|
||||
pdata->info.priv = pdata;
|
||||
pdata->device = dev;
|
||||
|
||||
ret = uio_register_device(&dev->device, &pdata->info);
|
||||
if (ret) {
|
||||
dev_err(&dev->device, "hv_uio register failed\n");
|
||||
goto fail_close;
|
||||
}
|
||||
|
||||
hv_set_drvdata(dev, pdata);
|
||||
|
||||
return 0;
|
||||
|
||||
fail_close:
|
||||
vmbus_close(dev->channel);
|
||||
fail:
|
||||
kfree(pdata);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int
|
||||
hv_uio_remove(struct hv_device *dev)
|
||||
{
|
||||
struct hv_uio_private_data *pdata = hv_get_drvdata(dev);
|
||||
|
||||
if (!pdata)
|
||||
return 0;
|
||||
|
||||
uio_unregister_device(&pdata->info);
|
||||
hv_set_drvdata(dev, NULL);
|
||||
vmbus_close(dev->channel);
|
||||
kfree(pdata);
|
||||
return 0;
|
||||
}
|
||||
|
||||
static struct hv_driver hv_uio_drv = {
|
||||
.name = "uio_hv_generic",
|
||||
.id_table = NULL, /* only dynamic id's */
|
||||
.probe = hv_uio_probe,
|
||||
.remove = hv_uio_remove,
|
||||
};
|
||||
|
||||
static int __init
|
||||
hyperv_module_init(void)
|
||||
{
|
||||
return vmbus_driver_register(&hv_uio_drv);
|
||||
}
|
||||
|
||||
static void __exit
|
||||
hyperv_module_exit(void)
|
||||
{
|
||||
vmbus_driver_unregister(&hv_uio_drv);
|
||||
}
|
||||
|
||||
module_init(hyperv_module_init);
|
||||
module_exit(hyperv_module_exit);
|
||||
|
||||
MODULE_VERSION(DRIVER_VERSION);
|
||||
MODULE_LICENSE("GPL v2");
|
||||
MODULE_AUTHOR(DRIVER_AUTHOR);
|
||||
MODULE_DESCRIPTION(DRIVER_DESC);
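For illustration only (not part of the patch): userspace drives this device through the standard UIO contract, so write() feeds hv_uio_irqcontrol() above and read() blocks until uio_event_notify() fires. The device path and the bare-bones error handling are assumptions.

/* Hypothetical userspace consumer of uio_hv_generic. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint32_t count, unmask = 1;
	int fd = open("/dev/uio0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	for (;;) {
		/* re-enable the channel interrupt via the irqcontrol hook */
		if (write(fd, &unmask, sizeof(unmask)) != sizeof(unmask))
			break;
		/* block until the channel callback signals an event */
		if (read(fd, &count, sizeof(count)) != sizeof(count))
			break;
		printf("event #%u\n", (unsigned)count);
	}

	close(fd);
	return 0;
}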
|
|
@ -111,6 +111,7 @@ static void pruss_cleanup(struct device *dev, struct uio_pruss_dev *gdev)
|
|||
gdev->sram_vaddr,
|
||||
sram_pool_sz);
|
||||
kfree(gdev->info);
|
||||
clk_disable(gdev->pruss_clk);
|
||||
clk_put(gdev->pruss_clk);
|
||||
kfree(gdev);
|
||||
}
|
||||
|
@ -143,7 +144,14 @@ static int pruss_probe(struct platform_device *pdev)
|
|||
kfree(gdev);
|
||||
return ret;
|
||||
} else {
|
||||
clk_enable(gdev->pruss_clk);
|
||||
ret = clk_enable(gdev->pruss_clk);
|
||||
if (ret) {
|
||||
dev_err(dev, "Failed to enable clock\n");
|
||||
clk_put(gdev->pruss_clk);
|
||||
kfree(gdev->info);
|
||||
kfree(gdev);
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
regs_prussio = platform_get_resource(pdev, IORESOURCE_MEM, 0);
|
||||
|
|
|
@ -410,11 +410,11 @@ static void mei_wdt_unregister_work(struct work_struct *work)
|
|||
}
|
||||
|
||||
/**
|
||||
* mei_wdt_event_rx - callback for data receive
|
||||
* mei_wdt_rx - callback for data receive
|
||||
*
|
||||
* @cldev: bus device
|
||||
*/
|
||||
static void mei_wdt_event_rx(struct mei_cl_device *cldev)
|
||||
static void mei_wdt_rx(struct mei_cl_device *cldev)
|
||||
{
|
||||
struct mei_wdt *wdt = mei_cldev_get_drvdata(cldev);
|
||||
struct mei_wdt_start_response res;
|
||||
|
@ -482,11 +482,11 @@ out:
|
|||
}
|
||||
|
||||
/*
|
||||
* mei_wdt_notify_event - callback for event notification
|
||||
* mei_wdt_notif - callback for event notification
|
||||
*
|
||||
* @cldev: bus device
|
||||
*/
|
||||
static void mei_wdt_notify_event(struct mei_cl_device *cldev)
|
||||
static void mei_wdt_notif(struct mei_cl_device *cldev)
|
||||
{
|
||||
struct mei_wdt *wdt = mei_cldev_get_drvdata(cldev);
|
||||
|
||||
|
@ -496,23 +496,6 @@ static void mei_wdt_notify_event(struct mei_cl_device *cldev)
|
|||
mei_wdt_register(wdt);
|
||||
}
|
||||
|
||||
/**
|
||||
* mei_wdt_event - callback for event receive
|
||||
*
|
||||
* @cldev: bus device
|
||||
* @events: event mask
|
||||
* @context: callback context
|
||||
*/
|
||||
static void mei_wdt_event(struct mei_cl_device *cldev,
|
||||
u32 events, void *context)
|
||||
{
|
||||
if (events & BIT(MEI_CL_EVENT_RX))
|
||||
mei_wdt_event_rx(cldev);
|
||||
|
||||
if (events & BIT(MEI_CL_EVENT_NOTIF))
|
||||
mei_wdt_notify_event(cldev);
|
||||
}
|
||||
|
||||
#if IS_ENABLED(CONFIG_DEBUG_FS)
|
||||
|
||||
static ssize_t mei_dbgfs_read_activation(struct file *file, char __user *ubuf,
|
||||
|
@ -623,16 +606,17 @@ static int mei_wdt_probe(struct mei_cl_device *cldev,
|
|||
goto err_out;
|
||||
}
|
||||
|
||||
ret = mei_cldev_register_event_cb(wdt->cldev,
|
||||
BIT(MEI_CL_EVENT_RX) |
|
||||
BIT(MEI_CL_EVENT_NOTIF),
|
||||
mei_wdt_event, NULL);
|
||||
ret = mei_cldev_register_rx_cb(wdt->cldev, mei_wdt_rx);
|
||||
if (ret) {
|
||||
dev_err(&cldev->dev, "Could not reg rx event ret=%d\n", ret);
|
||||
goto err_disable;
|
||||
}
|
||||
|
||||
ret = mei_cldev_register_notif_cb(wdt->cldev, mei_wdt_notif);
|
||||
/* on legacy devices notification is not supported
|
||||
* this doesn't fail the registration for RX event
|
||||
*/
|
||||
if (ret && ret != -EOPNOTSUPP) {
|
||||
dev_err(&cldev->dev, "Could not register event ret=%d\n", ret);
|
||||
dev_err(&cldev->dev, "Could not reg notif event ret=%d\n", ret);
|
||||
goto err_disable;
|
||||
}
|
||||
|
||||
|
@ -699,25 +683,7 @@ static struct mei_cl_driver mei_wdt_driver = {
|
|||
.remove = mei_wdt_remove,
|
||||
};
|
||||
|
||||
static int __init mei_wdt_init(void)
|
||||
{
|
||||
int ret;
|
||||
|
||||
ret = mei_cldev_driver_register(&mei_wdt_driver);
|
||||
if (ret) {
|
||||
pr_err(KBUILD_MODNAME ": module registration failed\n");
|
||||
return ret;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void __exit mei_wdt_exit(void)
|
||||
{
|
||||
mei_cldev_driver_unregister(&mei_wdt_driver);
|
||||
}
|
||||
|
||||
module_init(mei_wdt_init);
|
||||
module_exit(mei_wdt_exit);
|
||||
module_mei_cl_driver(mei_wdt_driver);
|
||||
|
||||
MODULE_AUTHOR("Intel Corporation");
|
||||
MODULE_LICENSE("GPL");
|
||||
|
|
|
@ -0,0 +1,60 @@
|
|||
#include <linux/device.h>
|
||||
#include <linux/fpga/fpga-mgr.h>
|
||||
|
||||
#ifndef _LINUX_FPGA_BRIDGE_H
|
||||
#define _LINUX_FPGA_BRIDGE_H
|
||||
|
||||
struct fpga_bridge;
|
||||
|
||||
/**
|
||||
* struct fpga_bridge_ops - ops for low level FPGA bridge drivers
|
||||
* @enable_show: returns the FPGA bridge's status
|
||||
* @enable_set: set a FPGA bridge as enabled or disabled
|
||||
* @fpga_bridge_remove: set FPGA into a specific state during driver remove
|
||||
*/
|
||||
struct fpga_bridge_ops {
|
||||
int (*enable_show)(struct fpga_bridge *bridge);
|
||||
int (*enable_set)(struct fpga_bridge *bridge, bool enable);
|
||||
void (*fpga_bridge_remove)(struct fpga_bridge *bridge);
|
||||
};
|
||||
|
||||
/**
|
||||
* struct fpga_bridge - FPGA bridge structure
|
||||
* @name: name of low level FPGA bridge
|
||||
* @dev: FPGA bridge device
|
||||
* @mutex: enforces exclusive reference to bridge
|
||||
* @br_ops: pointer to struct of FPGA bridge ops
|
||||
* @info: fpga image specific information
|
||||
* @node: FPGA bridge list node
|
||||
* @priv: low level driver private date
|
||||
*/
|
||||
struct fpga_bridge {
|
||||
const char *name;
|
||||
struct device dev;
|
||||
struct mutex mutex; /* for exclusive reference to bridge */
|
||||
const struct fpga_bridge_ops *br_ops;
|
||||
struct fpga_image_info *info;
|
||||
struct list_head node;
|
||||
void *priv;
|
||||
};
|
||||
|
||||
#define to_fpga_bridge(d) container_of(d, struct fpga_bridge, dev)
|
||||
|
||||
struct fpga_bridge *of_fpga_bridge_get(struct device_node *node,
|
||||
struct fpga_image_info *info);
|
||||
void fpga_bridge_put(struct fpga_bridge *bridge);
|
||||
int fpga_bridge_enable(struct fpga_bridge *bridge);
|
||||
int fpga_bridge_disable(struct fpga_bridge *bridge);
|
||||
|
||||
int fpga_bridges_enable(struct list_head *bridge_list);
|
||||
int fpga_bridges_disable(struct list_head *bridge_list);
|
||||
void fpga_bridges_put(struct list_head *bridge_list);
|
||||
int fpga_bridge_get_to_list(struct device_node *np,
|
||||
struct fpga_image_info *info,
|
||||
struct list_head *bridge_list);
|
||||
|
||||
int fpga_bridge_register(struct device *dev, const char *name,
|
||||
const struct fpga_bridge_ops *br_ops, void *priv);
|
||||
void fpga_bridge_unregister(struct device *dev);
|
||||
|
||||
#endif /* _LINUX_FPGA_BRIDGE_H */
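To make the ops above concrete, here is a minimal sketch of a hypothetical low level bridge driver; the platform glue, the names and the no-op enable logic are assumptions, only the ops signatures and fpga_bridge_register()/fpga_bridge_unregister() come from this header.

/* Hypothetical minimal FPGA bridge driver. */
#include <linux/fpga/fpga-bridge.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct demo_br_priv {
	bool enabled;
};

static int demo_br_enable_show(struct fpga_bridge *bridge)
{
	struct demo_br_priv *priv = bridge->priv;

	return priv->enabled;
}

static int demo_br_enable_set(struct fpga_bridge *bridge, bool enable)
{
	struct demo_br_priv *priv = bridge->priv;

	/* a real driver would gate or ungate the bus isolation logic here */
	priv->enabled = enable;
	return 0;
}

static const struct fpga_bridge_ops demo_br_ops = {
	.enable_show = demo_br_enable_show,
	.enable_set = demo_br_enable_set,
};

static int demo_br_probe(struct platform_device *pdev)
{
	struct demo_br_priv *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	/* exposes name and state of this bridge under /sys/class/fpga_bridge */
	return fpga_bridge_register(&pdev->dev, "demo bridge",
				    &demo_br_ops, priv);
}

static int demo_br_remove(struct platform_device *pdev)
{
	fpga_bridge_unregister(&pdev->dev);
	return 0;
}

static struct platform_driver demo_br_driver = {
	.probe = demo_br_probe,
	.remove = demo_br_remove,
	.driver = {
		.name = "demo-fpga-bridge",
	},
};
module_platform_driver(demo_br_driver);
MODULE_LICENSE("GPL v2");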
|
|
@ -65,11 +65,26 @@ enum fpga_mgr_states {
|
|||
/*
|
||||
* FPGA Manager flags
|
||||
* FPGA_MGR_PARTIAL_RECONFIG: do partial reconfiguration if supported
|
||||
* FPGA_MGR_EXTERNAL_CONFIG: FPGA has been configured prior to Linux booting
|
||||
*/
|
||||
#define FPGA_MGR_PARTIAL_RECONFIG BIT(0)
|
||||
#define FPGA_MGR_EXTERNAL_CONFIG BIT(1)
|
||||
|
||||
/**
|
||||
* struct fpga_image_info - information specific to a FPGA image
|
||||
* @flags: boolean flags as defined above
|
||||
* @enable_timeout_us: maximum time to enable traffic through bridge (uSec)
|
||||
* @disable_timeout_us: maximum time to disable traffic through bridge (uSec)
|
||||
*/
|
||||
struct fpga_image_info {
|
||||
u32 flags;
|
||||
u32 enable_timeout_us;
|
||||
u32 disable_timeout_us;
|
||||
};
|
||||
|
||||
/**
|
||||
* struct fpga_manager_ops - ops for low level fpga manager drivers
|
||||
* @initial_header_size: Maximum number of bytes that should be passed into write_init
|
||||
* @state: returns an enum value of the FPGA's state
|
||||
* @write_init: prepare the FPGA to receive configuration data
|
||||
* @write: write count bytes of configuration data to the FPGA
|
||||
|
@ -81,11 +96,14 @@ enum fpga_mgr_states {
|
|||
* called, so leaving them out is fine.
|
||||
*/
|
||||
struct fpga_manager_ops {
|
||||
size_t initial_header_size;
|
||||
enum fpga_mgr_states (*state)(struct fpga_manager *mgr);
|
||||
int (*write_init)(struct fpga_manager *mgr, u32 flags,
|
||||
int (*write_init)(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *buf, size_t count);
|
||||
int (*write)(struct fpga_manager *mgr, const char *buf, size_t count);
|
||||
int (*write_complete)(struct fpga_manager *mgr, u32 flags);
|
||||
int (*write_complete)(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info);
|
||||
void (*fpga_remove)(struct fpga_manager *mgr);
|
||||
};
|
||||
|
||||
|
@ -109,14 +127,17 @@ struct fpga_manager {
|
|||
|
||||
#define to_fpga_manager(d) container_of(d, struct fpga_manager, dev)
|
||||
|
||||
int fpga_mgr_buf_load(struct fpga_manager *mgr, u32 flags,
|
||||
int fpga_mgr_buf_load(struct fpga_manager *mgr, struct fpga_image_info *info,
|
||||
const char *buf, size_t count);
|
||||
|
||||
int fpga_mgr_firmware_load(struct fpga_manager *mgr, u32 flags,
|
||||
int fpga_mgr_firmware_load(struct fpga_manager *mgr,
|
||||
struct fpga_image_info *info,
|
||||
const char *image_name);
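A hedged sketch of a caller adapted to the info-based signature above (the manager handle, flags and timeout values are illustrative only):

/* Hypothetical caller of the reworked firmware-load interface. */
static int demo_program_fpga(struct fpga_manager *mgr)
{
	struct fpga_image_info info = {
		.flags = FPGA_MGR_PARTIAL_RECONFIG,
		.enable_timeout_us = 1000,
		.disable_timeout_us = 1000,
	};

	return fpga_mgr_firmware_load(mgr, &info, "demo-image.rbf");
}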
|
||||
|
||||
struct fpga_manager *of_fpga_mgr_get(struct device_node *node);
|
||||
|
||||
struct fpga_manager *fpga_mgr_get(struct device *dev);
|
||||
|
||||
void fpga_mgr_put(struct fpga_manager *mgr);
|
||||
|
||||
int fpga_mgr_register(struct device *dev, const char *name,
|
||||
|
|
|
@ -696,7 +696,7 @@ enum vmbus_device_type {
|
|||
HV_FCOPY,
|
||||
HV_BACKUP,
|
||||
HV_DM,
|
||||
HV_UNKOWN,
|
||||
HV_UNKNOWN,
|
||||
};
|
||||
|
||||
struct vmbus_device {
|
||||
|
@ -1119,6 +1119,12 @@ struct hv_driver {
|
|||
|
||||
struct device_driver driver;
|
||||
|
||||
/* dynamic device GUID's */
|
||||
struct {
|
||||
spinlock_t lock;
|
||||
struct list_head list;
|
||||
} dynids;
|
||||
|
||||
int (*probe)(struct hv_device *, const struct hv_vmbus_device_id *);
|
||||
int (*remove)(struct hv_device *);
|
||||
void (*shutdown)(struct hv_device *);
|
||||
|
@ -1447,6 +1453,7 @@ void hv_event_tasklet_enable(struct vmbus_channel *channel);
|
|||
|
||||
void hv_process_channel_removal(struct vmbus_channel *channel, u32 relid);
|
||||
|
||||
void vmbus_setevent(struct vmbus_channel *channel);
|
||||
/*
|
||||
* Negotiated version with the Host.
|
||||
*/
|
||||
|
@ -1479,10 +1486,11 @@ hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info)
|
|||
* there is room for the producer to send the pending packet.
|
||||
*/
|
||||
|
||||
static inline bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
|
||||
static inline void hv_signal_on_read(struct vmbus_channel *channel)
|
||||
{
|
||||
u32 cur_write_sz;
|
||||
u32 pending_sz;
|
||||
struct hv_ring_buffer_info *rbi = &channel->inbound;
|
||||
|
||||
/*
|
||||
* Issue a full memory barrier before making the signaling decision.
|
||||
|
@ -1500,14 +1508,14 @@ static inline bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
|
|||
pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
|
||||
/* If the other end is not blocked on write don't bother. */
|
||||
if (pending_sz == 0)
|
||||
return false;
|
||||
return;
|
||||
|
||||
cur_write_sz = hv_get_bytes_to_write(rbi);
|
||||
|
||||
if (cur_write_sz >= pending_sz)
|
||||
return true;
|
||||
vmbus_setevent(channel);
|
||||
|
||||
return false;
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -1519,31 +1527,23 @@ static inline struct vmpacket_descriptor *
|
|||
get_next_pkt_raw(struct vmbus_channel *channel)
|
||||
{
|
||||
struct hv_ring_buffer_info *ring_info = &channel->inbound;
|
||||
u32 read_loc = ring_info->priv_read_index;
|
||||
u32 priv_read_loc = ring_info->priv_read_index;
|
||||
void *ring_buffer = hv_get_ring_buffer(ring_info);
|
||||
struct vmpacket_descriptor *cur_desc;
|
||||
u32 packetlen;
|
||||
u32 dsize = ring_info->ring_datasize;
|
||||
u32 delta = read_loc - ring_info->ring_buffer->read_index;
|
||||
/*
|
||||
* delta is the difference between what is available to read and
|
||||
* what was already consumed in place. We commit read index after
|
||||
* the whole batch is processed.
|
||||
*/
|
||||
u32 delta = priv_read_loc >= ring_info->ring_buffer->read_index ?
|
||||
priv_read_loc - ring_info->ring_buffer->read_index :
|
||||
(dsize - ring_info->ring_buffer->read_index) + priv_read_loc;
|
||||
u32 bytes_avail_toread = (hv_get_bytes_to_read(ring_info) - delta);
|
||||
|
||||
if (bytes_avail_toread < sizeof(struct vmpacket_descriptor))
|
||||
return NULL;
|
||||
|
||||
if ((read_loc + sizeof(*cur_desc)) > dsize)
|
||||
return NULL;
|
||||
|
||||
cur_desc = ring_buffer + read_loc;
|
||||
packetlen = cur_desc->len8 << 3;
|
||||
|
||||
/*
|
||||
* If the packet under consideration is wrapping around,
|
||||
* return failure.
|
||||
*/
|
||||
if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > (dsize - 1))
|
||||
return NULL;
|
||||
|
||||
return cur_desc;
|
||||
return ring_buffer + priv_read_loc;
|
||||
}
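Since the wrap handling above is easy to get wrong, a standalone restatement of just that arithmetic (a throwaway sketch, not kernel code) helps sanity-check it with concrete numbers:

/* Check the wrap-safe read delta used by get_next_pkt_raw() above. */
#include <assert.h>
#include <stdint.h>

static uint32_t read_delta(uint32_t priv_read, uint32_t read_index,
			   uint32_t dsize)
{
	return priv_read >= read_index ?
		priv_read - read_index :
		(dsize - read_index) + priv_read;
}

int main(void)
{
	assert(read_delta(300, 100, 4096) == 200);	/* no wrap */
	assert(read_delta(50, 4000, 4096) == 146);	/* private index wrapped */
	return 0;
}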
|
||||
|
||||
/*
|
||||
|
@ -1555,16 +1555,14 @@ static inline void put_pkt_raw(struct vmbus_channel *channel,
|
|||
struct vmpacket_descriptor *desc)
|
||||
{
|
||||
struct hv_ring_buffer_info *ring_info = &channel->inbound;
|
||||
u32 read_loc = ring_info->priv_read_index;
|
||||
u32 packetlen = desc->len8 << 3;
|
||||
u32 dsize = ring_info->ring_datasize;
|
||||
|
||||
if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > dsize)
|
||||
BUG();
|
||||
/*
|
||||
* Include the packet trailer.
|
||||
*/
|
||||
ring_info->priv_read_index += packetlen + VMBUS_PKT_TRAILER;
|
||||
ring_info->priv_read_index %= dsize;
|
||||
}
|
||||
|
||||
/*
|
||||
|
@ -1589,8 +1587,7 @@ static inline void commit_rd_index(struct vmbus_channel *channel)
|
|||
virt_rmb();
|
||||
ring_info->ring_buffer->read_index = ring_info->priv_read_index;
|
||||
|
||||
if (hv_need_to_signal_on_read(ring_info))
|
||||
vmbus_set_event(channel);
|
||||
hv_signal_on_read(channel);
|
||||
}
|
||||
|
||||
|
||||
|
|
|
@ -8,8 +8,7 @@
|
|||
struct mei_cl_device;
|
||||
struct mei_device;
|
||||
|
||||
typedef void (*mei_cldev_event_cb_t)(struct mei_cl_device *cldev,
|
||||
u32 events, void *context);
|
||||
typedef void (*mei_cldev_cb_t)(struct mei_cl_device *cldev);
|
||||
|
||||
/**
|
||||
* struct mei_cl_device - MEI device handle
|
||||
|
@ -24,12 +23,12 @@ typedef void (*mei_cldev_event_cb_t)(struct mei_cl_device *cldev,
|
|||
* @me_cl: me client
|
||||
* @cl: mei client
|
||||
* @name: device name
|
||||
* @event_work: async work to execute event callback
|
||||
* @event_cb: Drivers register this callback to get asynchronous ME
|
||||
* events (e.g. Rx buffer pending) notifications.
|
||||
* @event_context: event callback run context
|
||||
* @events_mask: Events bit mask requested by driver.
|
||||
* @events: Events bitmask sent to the driver.
|
||||
* @rx_work: async work to execute Rx event callback
|
||||
* @rx_cb: Drivers register this callback to get asynchronous ME
|
||||
* Rx buffer pending notifications.
|
||||
* @notif_work: async work to execute FW notif event callback
|
||||
* @notif_cb: Drivers register this callback to get asynchronous ME
|
||||
* FW notification pending notifications.
|
||||
*
|
||||
* @do_match: whether device can be matched with a driver
|
||||
* @is_added: device is already scanned
|
||||
|
@ -44,11 +43,10 @@ struct mei_cl_device {
|
|||
struct mei_cl *cl;
|
||||
char name[MEI_CL_NAME_SIZE];
|
||||
|
||||
struct work_struct event_work;
|
||||
mei_cldev_event_cb_t event_cb;
|
||||
void *event_context;
|
||||
unsigned long events_mask;
|
||||
unsigned long events;
|
||||
struct work_struct rx_work;
|
||||
mei_cldev_cb_t rx_cb;
|
||||
struct work_struct notif_work;
|
||||
mei_cldev_cb_t notif_cb;
|
||||
|
||||
unsigned int do_match:1;
|
||||
unsigned int is_added:1;
|
||||
|
@ -74,16 +72,27 @@ int __mei_cldev_driver_register(struct mei_cl_driver *cldrv,
|
|||
|
||||
void mei_cldev_driver_unregister(struct mei_cl_driver *cldrv);
|
||||
|
||||
/**
|
||||
* module_mei_cl_driver - Helper macro for registering mei cl driver
|
||||
*
|
||||
* @__mei_cldrv: mei_cl_driver structure
|
||||
*
|
||||
* Helper macro for mei cl drivers which do not do anything special in module
|
||||
* init/exit, for eliminating boilerplate code.
|
||||
*/
|
||||
#define module_mei_cl_driver(__mei_cldrv) \
|
||||
module_driver(__mei_cldrv, \
|
||||
mei_cldev_driver_register,\
|
||||
mei_cldev_driver_unregister)
|
||||
|
||||
ssize_t mei_cldev_send(struct mei_cl_device *cldev, u8 *buf, size_t length);
|
||||
ssize_t mei_cldev_recv(struct mei_cl_device *cldev, u8 *buf, size_t length);
|
||||
ssize_t mei_cldev_recv(struct mei_cl_device *cldev, u8 *buf, size_t length);
|
||||
ssize_t mei_cldev_recv_nonblock(struct mei_cl_device *cldev, u8 *buf,
|
||||
size_t length);
|
||||
|
||||
int mei_cldev_register_event_cb(struct mei_cl_device *cldev,
|
||||
unsigned long event_mask,
|
||||
mei_cldev_event_cb_t read_cb, void *context);
|
||||
|
||||
#define MEI_CL_EVENT_RX 0
|
||||
#define MEI_CL_EVENT_TX 1
|
||||
#define MEI_CL_EVENT_NOTIF 2
|
||||
int mei_cldev_register_rx_cb(struct mei_cl_device *cldev, mei_cldev_cb_t rx_cb);
|
||||
int mei_cldev_register_notif_cb(struct mei_cl_device *cldev,
|
||||
mei_cldev_cb_t notif_cb);
|
||||
|
||||
const uuid_le *mei_cldev_uuid(const struct mei_cl_device *cldev);
|
||||
u8 mei_cldev_ver(const struct mei_cl_device *cldev);
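A rough sketch of a client driver written against the new per-event callbacks (the client name, id table and buffer size are assumptions; the registration call and the module_mei_cl_driver() helper are the ones added above):

/* Hypothetical mei client driver using the RX callback registration. */
#include <linux/device.h>
#include <linux/mei_cl_bus.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>

static void demo_rx(struct mei_cl_device *cldev)
{
	u8 buf[64];
	ssize_t len;

	len = mei_cldev_recv(cldev, buf, sizeof(buf));
	if (len > 0)
		dev_dbg(&cldev->dev, "received %zd bytes\n", len);
}

static int demo_probe(struct mei_cl_device *cldev,
		      const struct mei_cl_device_id *id)
{
	/* notifications are optional, only the RX callback is registered */
	return mei_cldev_register_rx_cb(cldev, demo_rx);
}

static int demo_remove(struct mei_cl_device *cldev)
{
	return 0;
}

static const struct mei_cl_device_id demo_id_table[] = {
	{ .name = "demo" },	/* hypothetical client name */
	{ }
};

static struct mei_cl_driver demo_driver = {
	.id_table = demo_id_table,
	.name = "demo",
	.probe = demo_probe,
	.remove = demo_remove,
};
module_mei_cl_driver(demo_driver);
MODULE_LICENSE("GPL");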
|
||||
|
|
|
@ -71,6 +71,13 @@ struct miscdevice {
|
|||
extern int misc_register(struct miscdevice *misc);
|
||||
extern void misc_deregister(struct miscdevice *misc);
|
||||
|
||||
/*
|
||||
* Helper macro for drivers that don't do anything special in the initcall.
|
||||
* This helps in eliminating boilerplate code.
|
||||
*/
|
||||
#define builtin_misc_device(__misc_device) \
|
||||
builtin_driver(__misc_device, misc_register)
|
||||
|
||||
/*
|
||||
* Helper macro for drivers that don't do anything special in module init / exit
|
||||
* call. This helps in eliminating boilerplate code.
|
||||
|
|
|
@ -1266,6 +1266,18 @@ static inline bool of_device_is_system_power_controller(const struct device_node
|
|||
* Overlay support
|
||||
*/
|
||||
|
||||
enum of_overlay_notify_action {
|
||||
OF_OVERLAY_PRE_APPLY,
|
||||
OF_OVERLAY_POST_APPLY,
|
||||
OF_OVERLAY_PRE_REMOVE,
|
||||
OF_OVERLAY_POST_REMOVE,
|
||||
};
|
||||
|
||||
struct of_overlay_notify_data {
|
||||
struct device_node *overlay;
|
||||
struct device_node *target;
|
||||
};
|
||||
|
||||
#ifdef CONFIG_OF_OVERLAY
|
||||
|
||||
/* ID based overlays; the API for external users */
|
||||
|
@ -1273,6 +1285,9 @@ int of_overlay_create(struct device_node *tree);
|
|||
int of_overlay_destroy(int id);
|
||||
int of_overlay_destroy_all(void);
|
||||
|
||||
int of_overlay_notifier_register(struct notifier_block *nb);
|
||||
int of_overlay_notifier_unregister(struct notifier_block *nb);
|
||||
|
||||
#else
|
||||
|
||||
static inline int of_overlay_create(struct device_node *tree)
|
||||
|
@ -1290,6 +1305,16 @@ static inline int of_overlay_destroy_all(void)
|
|||
return -ENOTSUPP;
|
||||
}
|
||||
|
||||
static inline int of_overlay_notifier_register(struct notifier_block *nb)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
static inline int of_overlay_notifier_unregister(struct notifier_block *nb)
|
||||
{
|
||||
return 0;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#endif /* _LINUX_OF_H */
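A rough usage sketch of the new notifier hooks (the module name, messages and chosen actions are illustrative; the API itself is the one declared above):

/* Hypothetical listener for overlay apply/remove events. */
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/of.h>

static int demo_overlay_notify(struct notifier_block *nb,
			       unsigned long action, void *arg)
{
	struct of_overlay_notify_data *nd = arg;

	switch (action) {
	case OF_OVERLAY_PRE_APPLY:
		pr_info("overlay about to be applied to %s\n", nd->target->name);
		break;
	case OF_OVERLAY_POST_REMOVE:
		pr_info("overlay removed from %s\n", nd->target->name);
		break;
	default:
		break;
	}

	return NOTIFY_OK;
}

static struct notifier_block demo_overlay_nb = {
	.notifier_call = demo_overlay_notify,
};

static int __init demo_overlay_watch_init(void)
{
	return of_overlay_notifier_register(&demo_overlay_nb);
}

static void __exit demo_overlay_watch_exit(void)
{
	of_overlay_notifier_unregister(&demo_overlay_nb);
}

module_init(demo_overlay_watch_init);
module_exit(demo_overlay_watch_exit);
MODULE_LICENSE("GPL");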
|
||||
|
|
|
@ -113,7 +113,6 @@ struct vme_driver {
int (*match)(struct vme_dev *);
int (*probe)(struct vme_dev *);
int (*remove)(struct vme_dev *);
void (*shutdown)(void);
struct device_driver driver;
struct list_head devices;
};
|
||||
|
|
|
@ -4002,6 +4002,7 @@ int access_process_vm(struct task_struct *tsk, unsigned long addr,

return ret;
}
EXPORT_SYMBOL_GPL(access_process_vm);

/*
* Print the name of a VMA.
|
||||
|
|