Merge branch 'drm-next' of git://people.freedesktop.org/~airlied/linux

Pull drm updates from Dave Airlie:
 "Highlights:

   - AMD KFD driver merge

     This is the AMD HSA driver, which exposes a low-level interface for
     GPGPU use.  They have an open source userspace built on top of this
     interface, and the code is about as good as it was going to get out
     of tree.

   - Initial atomic modesetting work

     The need for an atomic modesetting interface, which lets userspace
     try and then commit a complete set of modesetting state in one go,
     has been clear for a while, but the work had been suffering from
     neglect this past year.  No more: the start of the common code, and
     the changes for the msm driver to use it, are in this tree.  Ongoing
     work to finish the userspace ioctl and clean up the code will
     probably wait until the next kernel.

   - DisplayID 1.3 and tiled monitor support exposed to userspace.

     Tiled monitor property is now exposed for userspace to make use of.

   - Rockchip drm driver merged.

   - imx gpu driver moved out of staging

  Other stuff:

   - core:
        panel - MIPI DSI + new panels.
        expose suggested x/y properties for virtual GPUs

   - i915:
        Initial Skylake (SKL) support
        gen3/4 reset work
        start of dri1/ums removal
        infoframe tracking
        fixes for lots of things.

   - nouveau:
        tegra k1 voltage support
        GM204 modesetting support
        GT21x memory reclocking work

   - radeon:
        CI dpm fixes
        GPUVM improvements
        Initial DPM fan control

   - rcar-du:
        HDMI support added
        removed some support for old boards
        slave encoder driver for Analog Devices adv7511

   - exynos:
        Exynos4415 SoC support

   - msm:
        a4xx gpu support
        atomic helper conversion

   - tegra:
        iommu support
        universal plane support
        ganged-mode DSI support

   - sti:
        HDMI i2c improvements

   - vmwgfx:
        some late fixes.

   - qxl:
        use suggested x/y properties"

* 'drm-next' of git://people.freedesktop.org/~airlied/linux: (969 commits)
  drm: sti: fix module compilation issue
  drm/i915: save/restore GMBUS freq across suspend/resume on gen4
  drm: sti: correctly cleanup CRTC and planes
  drm: sti: add HQVDP plane
  drm: sti: add cursor plane
  drm: sti: enable auxiliary CRTC
  drm: sti: fix delay in VTG programming
  drm: sti: prepare sti_tvout to support auxiliary crtc
  drm: sti: use drm_crtc_vblank_{on/off} instead of drm_vblank_{on/off}
  drm: sti: fix hdmi avi infoframe
  drm: sti: remove event lock while disabling vblank
  drm: sti: simplify gdp code
  drm: sti: clear all mixer control
  drm: sti: remove gpio for HDMI hot plug detection
  drm: sti: allow to change hdmi ddc i2c adapter
  drm/doc: Document drm_add_modes_noedid() usage
  drm/i915: Remove '& 0xffff' from the mask given to WA_REG()
  drm/i915: Invert the mask and val arguments in wa_add() and WA_REG()
  drm: Zero out DRM object memory upon cleanup
  drm/i915/bdw: Fix the write setting up the WIZ hashing mode
  ...
Linus Torvalds 2014-12-15 15:52:01 -08:00
Parents: 26178ec11e 4e0cd68115
Commit: 988adfdffd
549 changed files: 53809 additions and 14944 deletions


@ -1197,6 +1197,13 @@ S: R. Tocantins, 89 - Cristo Rei
S: 80050-430 - Curitiba - Paraná
S: Brazil
N: Oded Gabbay
E: oded.gabbay@gmail.com
D: AMD KFD maintainer
S: 12 Shraga Raphaeli
S: Petah-Tikva, 4906418
S: Israel
N: Kumar Gala
E: galak@kernel.crashing.org
D: Embedded PowerPC 6xx/7xx/74xx/82xx/83xx/85xx support


@ -492,10 +492,10 @@ char *date;</synopsis>
<sect2>
<title>The Translation Table Manager (TTM)</title>
<para>
TTM design background and information belongs here.
</para>
<sect3>
<title>TTM initialization</title>
<warning><para>This section is outdated.</para></warning>
<para>
Drivers wishing to support TTM must fill out a drm_bo_driver
@ -503,42 +503,42 @@ char *date;</synopsis>
pointers for initializing the TTM, allocating and freeing memory,
waiting for command completion and fence synchronization, and memory
migration. See the radeon_ttm.c file for an example of usage.
</para>
<para>
The ttm_global_reference structure is made up of several fields:
</para>
<programlisting>
struct ttm_global_reference {
enum ttm_global_types global_type;
size_t size;
void *object;
int (*init) (struct ttm_global_reference *);
void (*release) (struct ttm_global_reference *);
};
</programlisting>
<para>
There should be one global reference structure for your memory
manager as a whole, and there will be others for each object
created by the memory manager at runtime. Your global TTM should
have a type of TTM_GLOBAL_TTM_MEM. The size field for the global
object should be sizeof(struct ttm_mem_global), and the init and
release hooks should point at your driver-specific init and
release routines, which probably eventually call
ttm_mem_global_init and ttm_mem_global_release, respectively.
</para>
<para>
Once your global TTM accounting structure is set up and initialized
by calling ttm_global_item_ref() on it,
you need to create a buffer object TTM to
provide a pool for buffer object allocation by clients and the
kernel itself. The type of this object should be TTM_GLOBAL_TTM_BO,
and its size should be sizeof(struct ttm_bo_global). Again,
driver-specific init and release functions may be provided,
likely eventually calling ttm_bo_global_init() and
ttm_bo_global_release(), respectively. Also, like the previous
object, ttm_global_item_ref() is used to create an initial reference
count for the TTM, which will call your initialization function.
</para>
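<para>
A minimal sketch of the sequence described above, using the structure and
function names as given in this (outdated) section; the foo_ prefix and the
two static reference objects are hypothetical, and error unwinding is
abbreviated:
</para>
<programlisting>
static struct ttm_global_reference foo_mem_global_ref;
static struct ttm_global_reference foo_bo_global_ref;

static int foo_ttm_mem_global_init(struct ttm_global_reference *ref)
{
	/* Forward to the core TTM memory accounting initializer. */
	return ttm_mem_global_init(ref->object);
}

static void foo_ttm_mem_global_release(struct ttm_global_reference *ref)
{
	ttm_mem_global_release(ref->object);
}

static int foo_ttm_global_init(void)
{
	int ret;

	/* One global accounting object for the memory manager as a whole. */
	foo_mem_global_ref.global_type = TTM_GLOBAL_TTM_MEM;
	foo_mem_global_ref.size = sizeof(struct ttm_mem_global);
	foo_mem_global_ref.init = foo_ttm_mem_global_init;
	foo_mem_global_ref.release = foo_ttm_mem_global_release;
	ret = ttm_global_item_ref(&amp;foo_mem_global_ref);
	if (ret != 0)
		return ret;

	/* A second global object provides the buffer object pool. */
	foo_bo_global_ref.global_type = TTM_GLOBAL_TTM_BO;
	foo_bo_global_ref.size = sizeof(struct ttm_bo_global);
	foo_bo_global_ref.init = ttm_bo_global_init;
	foo_bo_global_ref.release = ttm_bo_global_release;
	return ttm_global_item_ref(&amp;foo_bo_global_ref);
	/* (cleanup of the first reference on failure omitted for brevity) */
}
</programlisting>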
</sect3>
</sect2>
<sect2 id="drm-gem">
@ -566,19 +566,19 @@ char *date;</synopsis>
using driver-specific ioctls.
</para>
<para>
On a fundamental level, GEM involves several operations:
<itemizedlist>
<listitem>Memory allocation and freeing</listitem>
<listitem>Command execution</listitem>
<listitem>Aperture management at command execution time</listitem>
</itemizedlist>
Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.
</para>
<para>
Device-specific operations, such as command execution, pinning, buffer
read &amp; write, mapping, and domain ownership transfers are left to
driver-specific ioctls.
</para>
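<para>
As a minimal sketch (not taken from any particular driver), a hypothetical
foo driver could allocate a shmem-backed object and hand a handle back to
userspace roughly as follows; the foo_ name is a placeholder and error
handling is abbreviated:
</para>
<programlisting>
static int foo_gem_create(struct drm_device *dev, struct drm_file *file_priv,
			  size_t size, u32 *handle)
{
	struct drm_gem_object *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (obj == NULL)
		return -ENOMEM;

	/* Attach shmem backing storage of the requested size. */
	ret = drm_gem_object_init(dev, obj, PAGE_ALIGN(size));
	if (ret) {
		kfree(obj);
		return ret;
	}

	/* Publish the object to userspace as an opaque 32-bit handle. */
	ret = drm_gem_handle_create(file_priv, obj, handle);

	/* Drop the initial reference; the handle now keeps the object alive. */
	drm_gem_object_unreference_unlocked(obj);
	return ret;
}
</programlisting>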
<sect3>
@ -738,16 +738,16 @@ char *date;</synopsis>
respectively. The conversion is handled by the DRM core without any
driver-specific support.
</para>
<para>
GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly. See <xref linkend="drm-prime-support" />.
Since sharing file descriptors is inherently more secure than the
easily guessable and global GEM names it is the preferred buffer
sharing mechanism. Sharing buffers through GEM names is only supported
for legacy userspace. Furthermore PRIME also allows cross-device
buffer sharing since it is based on dma-bufs.
</para>
</sect3>
<sect3 id="drm-gem-objects-mapping">
<title>GEM Objects Mapping</title>
@ -852,7 +852,7 @@ char *date;</synopsis>
<sect3>
<title>Command Execution</title>
<para>
Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
@ -874,95 +874,101 @@ char *date;</synopsis>
<title>GEM Function Reference</title>
!Edrivers/gpu/drm/drm_gem.c
</sect3>
</sect2>
<sect2>
<title>VMA Offset Manager</title>
!Pdrivers/gpu/drm/drm_vma_manager.c vma offset manager
!Edrivers/gpu/drm/drm_vma_manager.c
!Iinclude/drm/drm_vma_manager.h
</sect2>
<sect2 id="drm-prime-support">
<title>PRIME Buffer Sharing</title>
<para>
PRIME is the cross device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace
PRIME buffers are dma-buf based file descriptors.
</para>
<sect3>
<title>Overview and Driver Interface</title>
<para>
Similar to GEM global names, PRIME file descriptors are
also used to share buffer objects across processes. They offer
additional security: as file descriptors must be explicitly sent over
UNIX domain sockets to be shared between applications, they can't be
guessed like the globally unique GEM names.
</para>
<para>
Drivers that support the PRIME
API must set the DRIVER_PRIME bit in the struct
<structname>drm_driver</structname>
<structfield>driver_features</structfield> field, and implement the
<methodname>prime_handle_to_fd</methodname> and
<methodname>prime_fd_to_handle</methodname> operations.
</para>
<para>
<synopsis>int (*prime_handle_to_fd)(struct drm_device *dev,
struct drm_file *file_priv, uint32_t handle,
uint32_t flags, int *prime_fd);
int (*prime_fd_to_handle)(struct drm_device *dev,
struct drm_file *file_priv, int prime_fd,
uint32_t *handle);</synopsis>
Those two operations convert a handle to a PRIME file descriptor and
vice versa. Drivers must use the kernel dma-buf buffer sharing framework
to manage the PRIME file descriptors. Similar to the mode setting
API PRIME is agnostic to the underlying buffer object manager, as
long as handles are 32bit unsigned integers.
</para>
<para>
While non-GEM drivers must implement the operations themselves, GEM
drivers must use the <function>drm_gem_prime_handle_to_fd</function>
and <function>drm_gem_prime_fd_to_handle</function> helper functions.
Those helpers rely on the driver
<methodname>gem_prime_export</methodname> and
<methodname>gem_prime_import</methodname> operations to create a dma-buf
instance from a GEM object (dma-buf exporter role) and to create a GEM
object from a dma-buf instance (dma-buf importer role).
</para>
<para>
<synopsis>struct dma_buf * (*gem_prime_export)(struct drm_device *dev,
struct drm_gem_object *obj,
int flags);
struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
struct dma_buf *dma_buf);</synopsis>
These two operations are mandatory for GEM drivers that support
PRIME.
</para>
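<para>
For GEM drivers the usual wiring is simply to point these operations at the
core helpers; a minimal sketch for a hypothetical foo driver (remaining
driver operations omitted):
</para>
<programlisting>
static struct drm_driver foo_driver = {
	.driver_features	= DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,

	/* Handle/file-descriptor conversion is done by the core helpers. */
	.prime_handle_to_fd	= drm_gem_prime_handle_to_fd,
	.prime_fd_to_handle	= drm_gem_prime_fd_to_handle,

	/* The helpers call back into these to export and import dma-bufs. */
	.gem_prime_export	= drm_gem_prime_export,
	.gem_prime_import	= drm_gem_prime_import,
};
</programlisting>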
</sect3>
</sect2>
<sect2>
<title>PRIME Function References</title>
<sect3>
<title>PRIME Helper Functions</title>
!Pdrivers/gpu/drm/drm_prime.c PRIME Helpers
</sect3>
</sect2>
<sect2>
<title>PRIME Function References</title>
!Edrivers/gpu/drm/drm_prime.c
</sect2>
<sect2>
<title>DRM MM Range Allocator</title>
<sect3>
<title>Overview</title>
!Pdrivers/gpu/drm/drm_mm.c Overview
</sect3>
<sect3>
<title>LRU Scan/Eviction Support</title>
!Pdrivers/gpu/drm/drm_mm.c lru scan roaster
</sect3>
</sect2>
<sect2>
<title>DRM MM Range Allocator Function References</title>
!Edrivers/gpu/drm/drm_mm.c
!Iinclude/drm/drm_mm.h
</sect2>
<sect2>
<title>CMA Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_gem_cma_helper.c cma helpers
!Edrivers/gpu/drm/drm_gem_cma_helper.c
!Iinclude/drm/drm_gem_cma_helper.h
</sect2>
</sect1>
<!-- Internals: mode setting -->
@ -994,6 +1000,10 @@ int max_width, max_height;</synopsis>
<title>Display Modes Function Reference</title>
!Iinclude/drm/drm_modes.h
!Edrivers/gpu/drm/drm_modes.c
</sect2>
<sect2>
<title>Atomic Mode Setting Function Reference</title>
!Edrivers/gpu/drm/drm_atomic.c
</sect2>
<sect2>
<title>Frame Buffer Creation</title>
@ -1825,6 +1835,10 @@ void intel_crt_init(struct drm_device *dev)
<sect2>
<title>KMS API Functions</title>
!Edrivers/gpu/drm/drm_crtc.c
</sect2>
<sect2>
<title>KMS Data Structures</title>
!Iinclude/drm/drm_crtc.h
</sect2>
<sect2>
<title>KMS Locking</title>
@ -1933,10 +1947,16 @@ void intel_crt_init(struct drm_device *dev)
and then retrieves a list of modes by calling the connector
<methodname>get_modes</methodname> helper operation.
</para>
<para>
If the helper operation returns no mode, and if the connector status
is connector_status_connected, standard VESA DMT modes up to
1024x768 are automatically added to the modes list by a call to
<function>drm_add_modes_noedid</function>.
</para>
<para>
The function filters out modes larger than
The function then filters out modes larger than
<parameter>max_width</parameter> and <parameter>max_height</parameter>
if specified. It then calls the optional connector
if specified. It finally calls the optional connector
<methodname>mode_valid</methodname> helper operation for each mode in
the probed list to check whether the mode is valid for the connector.
</para>
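<para>
A minimal sketch of the validation side of this, for a hypothetical foo
driver; the 165 MHz limit is only an example constraint, and the get_modes
implementation referenced here is sketched after the get_modes description
below:
</para>
<programlisting>
static enum drm_mode_status
foo_connector_mode_valid(struct drm_connector *connector,
			 struct drm_display_mode *mode)
{
	/* Reject modes the hypothetical encoder cannot drive. */
	if (mode->clock > 165000)	/* kHz */
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}

/* Registered on the connector with drm_connector_helper_add(). */
static const struct drm_connector_helper_funcs foo_connector_helper_funcs = {
	.get_modes = foo_connector_get_modes,	/* sketched below */
	.mode_valid = foo_connector_mode_valid,
	/* .best_encoder omitted */
};
</programlisting>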
@ -2076,11 +2096,19 @@ void intel_crt_init(struct drm_device *dev)
<synopsis>int (*get_modes)(struct drm_connector *connector);</synopsis>
<para>
Fill the connector's <structfield>probed_modes</structfield> list
by parsing EDID data with <function>drm_add_edid_modes</function> or
calling <function>drm_mode_probed_add</function> directly for every
by parsing EDID data with <function>drm_add_edid_modes</function>,
adding standard VESA DMT modes with <function>drm_add_modes_noedid</function>,
or calling <function>drm_mode_probed_add</function> directly for every
supported mode and return the number of modes it has detected. This
operation is mandatory.
</para>
<para>
Note that the caller function will automatically add standard VESA
DMT modes up to 1024x768 if the <methodname>get_modes</methodname>
helper operation returns no mode and if the connector status is
connector_status_connected. There is no need to call
<function>drm_add_edid_modes</function> manually in that case.
</para>
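<para>
A minimal get_modes sketch along these lines, for a hypothetical foo driver
whose connector structure carries a pointer to its DDC i2c adapter
(to_foo_connector() and the ddc field are placeholders):
</para>
<programlisting>
static int foo_connector_get_modes(struct drm_connector *connector)
{
	struct foo_connector *foo = to_foo_connector(connector);
	struct edid *edid;
	int count;

	/* Read the EDID over DDC; on failure return 0 and let the probe
	   helper fall back to the standard VESA DMT modes noted above. */
	edid = drm_get_edid(connector, foo->ddc);
	if (edid == NULL)
		return 0;

	drm_mode_connector_update_edid_property(connector, edid);
	count = drm_add_edid_modes(connector, edid);
	kfree(edid);

	return count;
}
</programlisting>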
<para>
When adding modes manually the driver creates each mode with a call to
<function>drm_mode_create</function> and must fill the following fields.
@ -2278,7 +2306,7 @@ void intel_crt_init(struct drm_device *dev)
<function>drm_helper_probe_single_connector_modes</function>.
</para>
<para>
When parsing EDID data, <function>drm_add_edid_modes</function> fill the
When parsing EDID data, <function>drm_add_edid_modes</function> fills the
connector <structfield>display_info</structfield>
<structfield>width_mm</structfield> and
<structfield>height_mm</structfield> fields. When creating modes
@ -2315,9 +2343,27 @@ void intel_crt_init(struct drm_device *dev)
</listitem>
</itemizedlist>
</sect2>
<sect2>
<title>Atomic Modeset Helper Functions Reference</title>
<sect3>
<title>Overview</title>
!Pdrivers/gpu/drm/drm_atomic_helper.c overview
</sect3>
<sect3>
<title>Implementing Asynchronous Atomic Commit</title>
!Pdrivers/gpu/drm/drm_atomic_helper.c implementing async commit
</sect3>
<sect3>
<title>Atomic State Reset and Initialization</title>
!Pdrivers/gpu/drm/drm_atomic_helper.c atomic state reset and initialization
</sect3>
!Iinclude/drm/drm_atomic_helper.h
!Edrivers/gpu/drm/drm_atomic_helper.c
</sect2>
<sect2>
<title>Modeset Helper Functions Reference</title>
!Edrivers/gpu/drm/drm_crtc_helper.c
!Pdrivers/gpu/drm/drm_crtc_helper.c overview
</sect2>
<sect2>
<title>Output Probing Helper Functions Reference</title>
@ -2341,6 +2387,12 @@ void intel_crt_init(struct drm_device *dev)
!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp mst helper
!Iinclude/drm/drm_dp_mst_helper.h
!Edrivers/gpu/drm/drm_dp_mst_topology.c
</sect2>
<sect2>
<title>MIPI DSI Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_mipi_dsi.c dsi helpers
!Iinclude/drm/drm_mipi_dsi.h
!Edrivers/gpu/drm/drm_mipi_dsi.c
</sect2>
<sect2>
<title>EDID Helper Functions Reference</title>
@ -2371,7 +2423,12 @@ void intel_crt_init(struct drm_device *dev)
</sect2>
<sect2>
<title id="drm-kms-planehelpers">Plane Helper Reference</title>
!Edrivers/gpu/drm/drm_plane_helper.c Plane Helpers
!Edrivers/gpu/drm/drm_plane_helper.c
!Pdrivers/gpu/drm/drm_plane_helper.c overview
</sect2>
<sect2>
<title>Tile group</title>
!Pdrivers/gpu/drm/drm_crtc.c Tile group
</sect2>
</sect1>
@ -2507,8 +2564,8 @@ void intel_crt_init(struct drm_device *dev)
<td valign="top" >Description/Restrictions</td>
</tr>
<tr>
<td rowspan="21" valign="top" >DRM</td>
<td rowspan="2" valign="top" >Generic</td>
<td rowspan="25" valign="top" >DRM</td>
<td rowspan="4" valign="top" >Generic</td>
<td valign="top" >“EDID”</td>
<td valign="top" >BLOB | IMMUTABLE</td>
<td valign="top" >0</td>
@ -2523,6 +2580,20 @@ void intel_crt_init(struct drm_device *dev)
<td valign="top" >Contains DPMS operation mode value.</td>
</tr>
<tr>
<td valign="top" >“PATH”</td>
<td valign="top" >BLOB | IMMUTABLE</td>
<td valign="top" >0</td>
<td valign="top" >Connector</td>
<td valign="top" >Contains topology path to a connector.</td>
</tr>
<tr>
<td valign="top" >“TILE”</td>
<td valign="top" >BLOB | IMMUTABLE</td>
<td valign="top" >0</td>
<td valign="top" >Connector</td>
<td valign="top" >Contains tiling information for a connector.</td>
</tr>
<tr>
<td rowspan="1" valign="top" >Plane</td>
<td valign="top" >“type”</td>
<td valign="top" >ENUM | IMMUTABLE</td>
@ -2638,6 +2709,21 @@ void intel_crt_init(struct drm_device *dev)
<td valign="top" >TBD</td>
</tr>
<tr>
<td rowspan="2" valign="top" >Virtual GPU</td>
<td valign="top" >“suggested X”</td>
<td valign="top" >RANGE</td>
<td valign="top" >Min=0, Max=0xffffffff</td>
<td valign="top" >Connector</td>
<td valign="top" >property to suggest an X offset for a connector</td>
</tr>
<tr>
<td valign="top" >“suggested Y”</td>
<td valign="top" >RANGE</td>
<td valign="top" >Min=0, Max=0xffffffff</td>
<td valign="top" >Connector</td>
<td valign="top" >property to suggest an Y offset for a connector</td>
</tr>
<tr>
<td rowspan="3" valign="top" >Optional</td>
<td valign="top" >“scaling mode”</td>
<td valign="top" >ENUM</td>
@ -3787,6 +3873,26 @@ int num_ioctls;</synopsis>
blocks. This excludes a set of SoC platforms with an SGX rendering unit,
those have basic support through the gma500 drm driver.
</para>
<sect1>
<title>Core Driver Infrastructure</title>
<para>
This section covers core driver infrastructure used by both the display
and the GEM parts of the driver.
</para>
<sect2>
<title>Runtime Power Management</title>
!Pdrivers/gpu/drm/i915/intel_runtime_pm.c runtime pm
!Idrivers/gpu/drm/i915/intel_runtime_pm.c
</sect2>
<sect2>
<title>Interrupt Handling</title>
!Pdrivers/gpu/drm/i915/i915_irq.c interrupt handling
!Fdrivers/gpu/drm/i915/i915_irq.c intel_irq_init intel_irq_init_hw intel_hpd_init
!Fdrivers/gpu/drm/i915/i915_irq.c intel_irq_fini
!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_disable_interrupts
!Fdrivers/gpu/drm/i915/i915_irq.c intel_runtime_pm_enable_interrupts
</sect2>
</sect1>
<sect1>
<title>Display Hardware Handling</title>
<para>
@ -3803,6 +3909,18 @@ int num_ioctls;</synopsis>
configuration change.
</para>
</sect2>
<sect2>
<title>Frontbuffer Tracking</title>
!Pdrivers/gpu/drm/i915/intel_frontbuffer.c frontbuffer tracking
!Idrivers/gpu/drm/i915/intel_frontbuffer.c
!Fdrivers/gpu/drm/i915/intel_drv.h intel_frontbuffer_flip
!Fdrivers/gpu/drm/i915/i915_gem.c i915_gem_track_fb
</sect2>
<sect2>
<title>Display FIFO Underrun Reporting</title>
!Pdrivers/gpu/drm/i915/intel_fifo_underrun.c fifo underrun handling
!Idrivers/gpu/drm/i915/intel_fifo_underrun.c
</sect2>
<sect2>
<title>Plane Configuration</title>
<para>
@ -3822,6 +3940,16 @@ int num_ioctls;</synopsis>
probing, so those sections fully apply.
</para>
</sect2>
<sect2>
<title>High Definition Audio</title>
!Pdrivers/gpu/drm/i915/intel_audio.c High Definition Audio over HDMI and Display Port
!Idrivers/gpu/drm/i915/intel_audio.c
</sect2>
<sect2>
<title>Panel Self Refresh PSR (PSR/SRD)</title>
!Pdrivers/gpu/drm/i915/intel_psr.c Panel Self Refresh (PSR/SRD)
!Idrivers/gpu/drm/i915/intel_psr.c
</sect2>
<sect2>
<title>DPIO</title>
!Pdrivers/gpu/drm/i915/i915_reg.h DPIO
@ -3931,6 +4059,28 @@ int num_ioctls;</synopsis>
!Idrivers/gpu/drm/i915/intel_lrc.c
</sect2>
</sect1>
<sect1>
<title> Tracing </title>
<para>
This section covers all things related to the tracepoints implemented in
the i915 driver.
</para>
<sect2>
<title> i915_ppgtt_create and i915_ppgtt_release </title>
!Pdrivers/gpu/drm/i915/i915_trace.h i915_ppgtt_create and i915_ppgtt_release tracepoints
</sect2>
<sect2>
<title> i915_context_create and i915_context_free </title>
!Pdrivers/gpu/drm/i915/i915_trace.h i915_context_create and i915_context_free tracepoints
</sect2>
<sect2>
<title> switch_mm </title>
!Pdrivers/gpu/drm/i915/i915_trace.h switch_mm tracepoint
</sect2>
</sect1>
</chapter>
!Cdrivers/gpu/drm/i915/i915_irq.c
</part>
</book>


@ -191,6 +191,8 @@ of the following host1x client modules:
- nvidia,hpd-gpio: specifies a GPIO used for hotplug detection
- nvidia,edid: supplies a binary EDID blob
- nvidia,panel: phandle of a display panel
- nvidia,ganged-mode: contains a phandle to a second DSI controller to gang
up with in order to support up to 8 data lanes
- sor: serial output resource


@ -68,7 +68,7 @@ STMicroelectronics stih4xx platforms
number of clocks may depend of the SoC type.
- clock-names: names of the clocks listed in clocks property in the same
order.
- hdmi,hpd-gpio: gpio id to detect if an hdmi cable is plugged or not.
- ddc: phandle of an I2C controller used for DDC EDID probing
sti-hda:
Required properties:
@ -83,6 +83,22 @@ sti-hda:
- clock-names: names of the clocks listed in clocks property in the same
order.
sti-hqvdp:
must be a child of sti-display-subsystem
Required properties:
- compatible: "st,stih<chip>-hqvdp"
- reg: Physical base address of the IP registers and length of memory mapped region.
- clocks: from common clock binding: handle hardware IP needed clocks, the
number of clocks may depend of the SoC type.
See ../clocks/clock-bindings.txt for details.
- clock-names: names of the clocks listed in clocks property in the same
order.
- resets: resets to be used by the device
See ../reset/reset.txt for details.
- reset-names: names of the resets listed in resets property in the same
order.
- st,vtg: phandle on vtg main device node.
Example:
/ {
@ -173,7 +189,6 @@ Example:
interrupt-names = "irq";
clock-names = "pix", "tmds", "phy", "audio";
clocks = <&clockgen_c_vcc CLK_S_PIX_HDMI>, <&clockgen_c_vcc CLK_S_TMDS_HDMI>, <&clockgen_c_vcc CLK_S_HDMI_REJECT_PLL>, <&clockgen_b1 CLK_S_PCM_0>;
hdmi,hpd-gpio = <&PIO2 5>;
};
sti-hda@fe85a000 {
@ -184,6 +199,16 @@ Example:
clocks = <&clockgen_c_vcc CLK_S_PIX_HD>, <&clockgen_c_vcc CLK_S_HDDAC>;
};
};
sti-hqvdp@9c000000 {
compatible = "st,stih407-hqvdp";
reg = <0x9C00000 0x100000>;
clock-names = "hqvdp", "pix_main";
clocks = <&clk_s_c0_flexgen CLK_MAIN_DISP>, <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>;
reset-names = "hqvdp";
resets = <&softreset STIH407_HDQVDP_SOFTRESET>;
st,vtg = <&vtg_main>;
};
};
...
};


@ -0,0 +1,7 @@
AU Optronics Corporation 11.6" HD (1366x768) color TFT-LCD panel
Required properties:
- compatible: should be "auo,b116xw03"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@ -0,0 +1,7 @@
HannStar Display Corp. HSD070PWW1 7.0" WXGA TFT LCD panel
Required properties:
- compatible: should be "hannstar,hsd070pww1"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@ -0,0 +1,7 @@
Hitachi Ltd. Corporation 9" WVGA (800x480) TFT LCD panel
Required properties:
- compatible: should be "hit,tx23d38vm0caa"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@ -0,0 +1,7 @@
Innolux Corporation 12.1" WXGA (1280x800) TFT LCD panel
Required properties:
- compatible: should be "innolux,g121i1-l01"
This binding is compatible with the simple-panel binding, which is specified
in simple-panel.txt in this directory.


@ -0,0 +1,49 @@
Sharp Microelectronics 10.1" WQXGA TFT LCD panel
This panel requires a dual-channel DSI host to operate. It supports two modes:
- left-right: each channel drives the left or right half of the screen
- even-odd: each channel drives the even or odd lines of the screen
Each of the DSI channels controls a separate DSI peripheral. The peripheral
driven by the first link (DSI-LINK1), left or even, is considered the primary
peripheral and controls the device. The 'link2' property contains a phandle
to the peripheral driven by the second link (DSI-LINK2, right or odd).
Note that in video mode the DSI-LINK1 interface always provides the left/even
pixels and DSI-LINK2 always provides the right/odd pixels. In command mode it
is possible to program either link to drive the left/even or right/odd pixels
but for the sake of consistency this binding assumes that the same assignment
is chosen as for video mode.
Required properties:
- compatible: should be "sharp,lq101r1sx01"
- reg: DSI virtual channel of the peripheral
Required properties (for DSI-LINK1 only):
- link2: phandle to the DSI peripheral on the secondary link. Note that the
presence of this property marks the containing node as DSI-LINK1.
- power-supply: phandle of the regulator that provides the supply voltage
Optional properties (for DSI-LINK1 only):
- backlight: phandle of the backlight device attached to the panel
Example:
dsi@54300000 {
panel: panel@0 {
compatible = "sharp,lq101r1sx01";
reg = <0>;
link2 = <&secondary>;
power-supply = <...>;
backlight = <...>;
};
};
dsi@54400000 {
secondary: panel@0 {
compatible = "sharp,lq101r1sx01";
reg = <0>;
};
};


@ -66,8 +66,10 @@ gmt Global Mixed-mode Technology, Inc.
google Google, Inc.
gumstix Gumstix, Inc.
gw Gateworks Corporation
hannstar HannStar Display Corporation
haoyu Haoyu Microelectronic Co. Ltd.
hisilicon Hisilicon Limited.
hit Hitachi Ltd.
honeywell Honeywell
hp Hewlett Packard
i2se I2SE GmbH


@ -0,0 +1,88 @@
Analog Devices ADV7511(W)/13 HDMI Encoders
-----------------------------------------
The ADV7511, ADV7511W and ADV7513 are HDMI audio and video transmitters
compatible with HDMI 1.4 and DVI 1.0. They support color space conversion,
S/PDIF, CEC and HDCP.
Required properties:
- compatible: Should be one of "adi,adv7511", "adi,adv7511w" or "adi,adv7513"
- reg: I2C slave address
The ADV7511 supports a large number of input data formats that differ by their
color depth, color format, clock mode, bit justification and random
arrangement of components on the data bus. The combination of the following
properties describe the input and map directly to the video input tables of the
ADV7511 datasheet that document all the supported combinations.
- adi,input-depth: Number of bits per color component at the input (8, 10 or
12).
- adi,input-colorspace: The input color space, one of "rgb", "yuv422" or
"yuv444".
- adi,input-clock: The input clock type, one of "1x" (one clock cycle per
pixel), "2x" (two clock cycles per pixel), "ddr" (one clock cycle per pixel,
data driven on both edges).
The following input format properties are required except in "rgb 1x" and
"yuv444 1x" modes, in which case they must not be specified.
- adi,input-style: The input components arrangement variant (1, 2 or 3), as
listed in the input format tables in the datasheet.
- adi,input-justification: The input bit justification ("left", "evenly",
"right").
Optional properties:
- interrupts: Specifier for the ADV7511 interrupt
- pd-gpios: Specifier for the GPIO connected to the power down signal
- adi,clock-delay: Video data clock delay relative to the pixel clock, in ps
(-1200 ps .. 1600 ps). Defaults to no delay.
- adi,embedded-sync: The input uses synchronization signals embedded in the
data stream (similar to BT.656). Defaults to separate H/V synchronization
signals.
Required nodes:
The ADV7511 has two video ports. Their connections are modelled using the OF
graph bindings specified in Documentation/devicetree/bindings/graph.txt.
- Video port 0 for the RGB or YUV input
- Video port 1 for the HDMI output
Example
-------
adv7511w: hdmi@39 {
compatible = "adi,adv7511w";
reg = <39>;
interrupt-parent = <&gpio3>;
interrupts = <29 IRQ_TYPE_EDGE_FALLING>;
adi,input-depth = <8>;
adi,input-colorspace = "rgb";
adi,input-clock = "1x";
adi,input-style = <1>;
adi,input-justification = "evenly";
ports {
#address-cells = <1>;
#size-cells = <0>;
port@0 {
reg = <0>;
adv7511w_in: endpoint {
remote-endpoint = <&dpi_out>;
};
};
port@1 {
reg = <1>;
adv7511_out: endpoint {
remote-endpoint = <&hdmi_connector_in>;
};
};
};
};


@ -4,6 +4,7 @@ Required properties:
- compatible: value should be one of the following
"samsung,exynos3250-mipi-dsi" /* for Exynos3250/3472 SoCs */
"samsung,exynos4210-mipi-dsi" /* for Exynos4 SoCs */
"samsung,exynos4415-mipi-dsi" /* for Exynos4415 SoC */
"samsung,exynos5410-mipi-dsi" /* for Exynos5410/5420/5440 SoCs */
- reg: physical base address and length of the registers set for the device
- interrupts: should contain DSI interrupt


@ -0,0 +1,19 @@
Rockchip DRM master device
================================
The Rockchip DRM master device is a virtual device needed to list all
vop devices or other display interface nodes that comprise the
graphics subsystem.
Required properties:
- compatible: Should be "rockchip,display-subsystem"
- ports: Should contain a list of phandles pointing to display interface port
of vop devices. vop definitions as defined in
Documentation/devicetree/bindings/video/rockchip-vop.txt
example:
display-subsystem {
compatible = "rockchip,display-subsystem";
ports = <&vopl_out>, <&vopb_out>;
};


@ -0,0 +1,58 @@
device-tree bindings for rockchip soc display controller (vop)
VOP (Visual Output Processor) is the Display Controller for the Rockchip
series of SoCs which transfers the image data from a video memory
buffer to an external LCD interface.
Required properties:
- compatible: value should be one of the following
"rockchip,rk3288-vop";
- interrupts: should contain a list of all VOP IP block interrupts in the
order: VSYNC, LCD_SYSTEM. The interrupt specifier
format depends on the interrupt controller used.
- clocks: must include clock specifiers corresponding to entries in the
clock-names property.
- clock-names: Must contain
aclk_vop: for ddr buffer transfer.
hclk_vop: for ahb bus to R/W the phy regs.
dclk_vop: pixel clock.
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- axi
- ahb
- dclk
- iommus: phandle to the required iommu node
- port: A port node with endpoint definitions as defined in
Documentation/devicetree/bindings/media/video-interfaces.txt.
Example:
SoC specific DT entry:
vopb: vopb@ff930000 {
compatible = "rockchip,rk3288-vop";
reg = <0xff930000 0x19c>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru ACLK_VOP0>, <&cru DCLK_VOP0>, <&cru HCLK_VOP0>;
clock-names = "aclk_vop", "dclk_vop", "hclk_vop";
resets = <&cru SRST_LCDC1_AXI>, <&cru SRST_LCDC1_AHB>, <&cru SRST_LCDC1_DCLK>;
reset-names = "axi", "ahb", "dclk";
iommus = <&vopb_mmu>;
vopb_out: port {
#address-cells = <1>;
#size-cells = <0>;
vopb_out_edp: endpoint@0 {
reg = <0>;
remote-endpoint=<&edp_in_vopb>;
};
vopb_out_hdmi: endpoint@1 {
reg = <1>;
remote-endpoint=<&hdmi_in_vopb>;
};
};
};


@ -11,6 +11,7 @@ Required properties:
"samsung,s5pv210-fimd"; /* for S5PV210 SoC */
"samsung,exynos3250-fimd"; /* for Exynos3250/3472 SoCs */
"samsung,exynos4210-fimd"; /* for Exynos4 SoCs */
"samsung,exynos4415-fimd"; /* for Exynos4415 SoC */
"samsung,exynos5250-fimd"; /* for Exynos5 SoCs */
- reg: physical base address and length of the FIMD registers set.


@ -618,6 +618,16 @@ S: Maintained
F: drivers/iommu/amd_iommu*.[ch]
F: include/linux/amd-iommu.h
AMD KFD
M: Oded Gabbay <oded.gabbay@amd.com>
L: dri-devel@lists.freedesktop.org
T: git git://people.freedesktop.org/~gabbayo/linux.git
S: Supported
F: drivers/gpu/drm/amd/amdkfd/
F: drivers/gpu/drm/radeon/radeon_kfd.c
F: drivers/gpu/drm/radeon/radeon_kfd.h
F: include/uapi/linux/kfd_ioctl.h
AMD MICROCODE UPDATE SUPPORT
M: Andreas Herrmann <herrmann.der.user@googlemail.com>
L: amd64-microcode@amd64.org
@ -3297,6 +3307,13 @@ F: drivers/gpu/drm/exynos/
F: include/drm/exynos*
F: include/uapi/drm/exynos*
DRM DRIVERS FOR FREESCALE IMX
M: Philipp Zabel <p.zabel@pengutronix.de>
L: dri-devel@lists.freedesktop.org
S: Maintained
F: drivers/gpu/drm/imx/
F: Documentation/devicetree/bindings/drm/imx/
DRM DRIVERS FOR NVIDIA TEGRA
M: Thierry Reding <thierry.reding@gmail.com>
M: Terje Bergström <tbergstrom@nvidia.com>


@ -32,7 +32,6 @@
#include <linux/pinctrl/machine.h>
#include <linux/platform_data/camera-rcar.h>
#include <linux/platform_data/gpio-rcar.h>
#include <linux/platform_data/rcar-du.h>
#include <linux/platform_data/usb-rcar-gen2-phy.h>
#include <linux/platform_device.h>
#include <linux/phy.h>
@ -83,61 +82,6 @@
*
*/
/* DU */
static struct rcar_du_encoder_data lager_du_encoders[] = {
{
.type = RCAR_DU_ENCODER_VGA,
.output = RCAR_DU_OUTPUT_DPAD0,
}, {
.type = RCAR_DU_ENCODER_NONE,
.output = RCAR_DU_OUTPUT_LVDS1,
.connector.lvds.panel = {
.width_mm = 210,
.height_mm = 158,
.mode = {
.pixelclock = 65000000,
.hactive = 1024,
.hfront_porch = 20,
.hback_porch = 160,
.hsync_len = 136,
.vactive = 768,
.vfront_porch = 3,
.vback_porch = 29,
.vsync_len = 6,
},
},
},
};
static const struct rcar_du_platform_data lager_du_pdata __initconst = {
.encoders = lager_du_encoders,
.num_encoders = ARRAY_SIZE(lager_du_encoders),
};
static const struct resource du_resources[] __initconst = {
DEFINE_RES_MEM(0xfeb00000, 0x70000),
DEFINE_RES_MEM_NAMED(0xfeb90000, 0x1c, "lvds.0"),
DEFINE_RES_MEM_NAMED(0xfeb94000, 0x1c, "lvds.1"),
DEFINE_RES_IRQ(gic_spi(256)),
DEFINE_RES_IRQ(gic_spi(268)),
DEFINE_RES_IRQ(gic_spi(269)),
};
static void __init lager_add_du_device(void)
{
struct platform_device_info info = {
.name = "rcar-du-r8a7790",
.id = -1,
.res = du_resources,
.num_res = ARRAY_SIZE(du_resources),
.data = &lager_du_pdata,
.size_data = sizeof(lager_du_pdata),
.dma_mask = DMA_BIT_MASK(32),
};
platform_device_register_full(&info);
}
/* LEDS */
static struct gpio_led lager_leds[] = {
{
@ -800,8 +744,6 @@ static void __init lager_add_standard_devices(void)
platform_device_register_full(&ether_info);
lager_add_du_device();
platform_device_register_resndata(NULL, "qspi", 0,
qspi_resources,
ARRAY_SIZE(qspi_resources),


@ -27,7 +27,6 @@
#include <linux/pinctrl/machine.h>
#include <linux/platform_data/camera-rcar.h>
#include <linux/platform_data/gpio-rcar.h>
#include <linux/platform_data/rcar-du.h>
#include <linux/platform_data/usb-rcar-phy.h>
#include <linux/regulator/fixed.h>
#include <linux/regulator/machine.h>
@ -171,62 +170,6 @@ static struct platform_device hspi_device = {
.num_resources = ARRAY_SIZE(hspi_resources),
};
/*
* DU
*
* The panel only specifies the [hv]display and [hv]total values. The position
* and width of the sync pulses don't matter, they're copied from VESA timings.
*/
static struct rcar_du_encoder_data du_encoders[] = {
{
.type = RCAR_DU_ENCODER_VGA,
.output = RCAR_DU_OUTPUT_DPAD0,
}, {
.type = RCAR_DU_ENCODER_LVDS,
.output = RCAR_DU_OUTPUT_DPAD1,
.connector.lvds.panel = {
.width_mm = 210,
.height_mm = 158,
.mode = {
.pixelclock = 65000000,
.hactive = 1024,
.hfront_porch = 20,
.hback_porch = 160,
.hsync_len = 136,
.vactive = 768,
.vfront_porch = 3,
.vback_porch = 29,
.vsync_len = 6,
},
},
},
};
static const struct rcar_du_platform_data du_pdata __initconst = {
.encoders = du_encoders,
.num_encoders = ARRAY_SIZE(du_encoders),
};
static const struct resource du_resources[] __initconst = {
DEFINE_RES_MEM(0xfff80000, 0x40000),
DEFINE_RES_IRQ(gic_iid(0x3f)),
};
static void __init marzen_add_du_device(void)
{
struct platform_device_info info = {
.name = "rcar-du-r8a7779",
.id = -1,
.res = du_resources,
.num_res = ARRAY_SIZE(du_resources),
.data = &du_pdata,
.size_data = sizeof(du_pdata),
.dma_mask = DMA_BIT_MASK(32),
};
platform_device_register_full(&info);
}
/* LEDS */
static struct gpio_led marzen_leds[] = {
{
@ -385,7 +328,6 @@ static void __init marzen_init(void)
platform_device_register_full(&vin1_info);
platform_device_register_full(&vin3_info);
platform_add_devices(marzen_devices, ARRAY_SIZE(marzen_devices));
marzen_add_du_device();
}
static const char *marzen_boards_compat_dt[] __initdata = {


@ -455,6 +455,23 @@ struct intel_stolen_funcs {
u32 (*base)(int num, int slot, int func, size_t size);
};
static size_t __init gen9_stolen_size(int num, int slot, int func)
{
u16 gmch_ctrl;
gmch_ctrl = read_pci_config_16(num, slot, func, SNB_GMCH_CTRL);
gmch_ctrl >>= BDW_GMCH_GMS_SHIFT;
gmch_ctrl &= BDW_GMCH_GMS_MASK;
if (gmch_ctrl < 0xf0)
return gmch_ctrl << 25; /* 32 MB units */
else
/* 4MB increments starting at 0xf0 for 4MB */
return (gmch_ctrl - 0xf0 + 1) << 22;
}
typedef size_t (*stolen_size_fn)(int num, int slot, int func);
static const struct intel_stolen_funcs i830_stolen_funcs __initconst = {
.base = i830_stolen_base,
.size = i830_stolen_size,
@ -490,6 +507,11 @@ static const struct intel_stolen_funcs gen8_stolen_funcs __initconst = {
.size = gen8_stolen_size,
};
static const struct intel_stolen_funcs gen9_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = gen9_stolen_size,
};
static const struct intel_stolen_funcs chv_stolen_funcs __initconst = {
.base = intel_stolen_base,
.size = chv_stolen_size,
@ -523,6 +545,7 @@ static const struct pci_device_id intel_stolen_ids[] __initconst = {
INTEL_BDW_M_IDS(&gen8_stolen_funcs),
INTEL_BDW_D_IDS(&gen8_stolen_funcs),
INTEL_CHV_IDS(&chv_stolen_funcs),
INTEL_SKL_IDS(&gen9_stolen_funcs),
};
static void __init intel_graphics_stolen(int num, int slot, int func)


@ -153,7 +153,6 @@ static struct page *i8xx_alloc_pages(void)
__free_pages(page, 2);
return NULL;
}
get_page(page);
atomic_inc(&agp_bridge->current_memory_agp);
return page;
}
@ -164,7 +163,6 @@ static void i8xx_destroy_pages(struct page *page)
return;
set_pages_wb(page, 4);
put_page(page);
__free_pages(page, 2);
atomic_dec(&agp_bridge->current_memory_agp);
}
@ -300,7 +298,6 @@ static int intel_gtt_setup_scratch_page(void)
page = alloc_page(GFP_KERNEL | GFP_DMA32 | __GFP_ZERO);
if (page == NULL)
return -ENOMEM;
get_page(page);
set_pages_uc(page, 1);
if (intel_private.needs_dmar) {
@ -560,7 +557,6 @@ static void intel_gtt_teardown_scratch_page(void)
set_pages_wb(intel_private.scratch_page, 1);
pci_unmap_page(intel_private.pcidev, intel_private.scratch_page_dma,
PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
put_page(intel_private.scratch_page);
__free_page(intel_private.scratch_page);
}


@ -167,6 +167,8 @@ config DRM_SAVAGE
source "drivers/gpu/drm/exynos/Kconfig"
source "drivers/gpu/drm/rockchip/Kconfig"
source "drivers/gpu/drm/vmwgfx/Kconfig"
source "drivers/gpu/drm/gma500/Kconfig"
@ -200,3 +202,7 @@ source "drivers/gpu/drm/tegra/Kconfig"
source "drivers/gpu/drm/panel/Kconfig"
source "drivers/gpu/drm/sti/Kconfig"
source "drivers/gpu/drm/amd/amdkfd/Kconfig"
source "drivers/gpu/drm/imx/Kconfig"


@ -14,7 +14,7 @@ drm-y := drm_auth.o drm_bufs.o drm_cache.o \
drm_info.o drm_debugfs.o drm_encoder_slave.o \
drm_trace_points.o drm_global.o drm_prime.o \
drm_rect.o drm_vma_manager.o drm_flip_work.o \
drm_modeset_lock.o
drm_modeset_lock.o drm_atomic.o
drm-$(CONFIG_COMPAT) += drm_ioc32.o
drm-$(CONFIG_DRM_GEM_CMA_HELPER) += drm_gem_cma_helper.o
@ -23,7 +23,7 @@ drm-$(CONFIG_DRM_PANEL) += drm_panel.o
drm-$(CONFIG_OF) += drm_of.o
drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o \
drm_plane_helper.o drm_dp_mst_topology.o
drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
drm_kms_helper-$(CONFIG_DRM_KMS_FB_HELPER) += drm_fb_helper.o
drm_kms_helper-$(CONFIG_DRM_KMS_CMA_HELPER) += drm_fb_cma_helper.o
@ -49,6 +49,7 @@ obj-$(CONFIG_DRM_VMWGFX)+= vmwgfx/
obj-$(CONFIG_DRM_VIA) +=via/
obj-$(CONFIG_DRM_NOUVEAU) +=nouveau/
obj-$(CONFIG_DRM_EXYNOS) +=exynos/
obj-$(CONFIG_DRM_ROCKCHIP) +=rockchip/
obj-$(CONFIG_DRM_GMA500) += gma500/
obj-$(CONFIG_DRM_UDL) += udl/
obj-$(CONFIG_DRM_AST) += ast/
@ -62,6 +63,8 @@ obj-$(CONFIG_DRM_BOCHS) += bochs/
obj-$(CONFIG_DRM_MSM) += msm/
obj-$(CONFIG_DRM_TEGRA) += tegra/
obj-$(CONFIG_DRM_STI) += sti/
obj-$(CONFIG_DRM_IMX) += imx/
obj-y += i2c/
obj-y += panel/
obj-y += bridge/
obj-$(CONFIG_HSA_AMD) += amd/amdkfd/


@ -1,43 +0,0 @@
************************************************************
* For the very latest on DRI development, please see: *
* http://dri.freedesktop.org/ *
************************************************************
The Direct Rendering Manager (drm) is a device-independent kernel-level
device driver that provides support for the XFree86 Direct Rendering
Infrastructure (DRI).
The DRM supports the Direct Rendering Infrastructure (DRI) in four major
ways:
1. The DRM provides synchronized access to the graphics hardware via
the use of an optimized two-tiered lock.
2. The DRM enforces the DRI security policy for access to the graphics
hardware by only allowing authenticated X11 clients access to
restricted regions of memory.
3. The DRM provides a generic DMA engine, complete with multiple
queues and the ability to detect the need for an OpenGL context
switch.
4. The DRM is extensible via the use of small device-specific modules
that rely extensively on the API exported by the DRM module.
Documentation on the DRI is available from:
http://dri.freedesktop.org/wiki/Documentation
http://sourceforge.net/project/showfiles.php?group_id=387
http://dri.sourceforge.net/doc/
For specific information about kernel-level support, see:
The Direct Rendering Manager, Kernel Support for the Direct Rendering
Infrastructure
http://dri.sourceforge.net/doc/drm_low_level.html
Hardware Locking for the Direct Rendering Infrastructure
http://dri.sourceforge.net/doc/hardware_locking_low_level.html
A Security Analysis of the Direct Rendering Infrastructure
http://dri.sourceforge.net/doc/security_low_level.html


@ -0,0 +1,9 @@
#
# Heterogenous system architecture configuration
#
config HSA_AMD
tristate "HSA kernel driver for AMD GPU devices"
depends on DRM_RADEON && AMD_IOMMU_V2 && X86_64
help
Enable this if you want to use HSA features on AMD GPU devices.


@ -0,0 +1,14 @@
#
# Makefile for Heterogenous System Architecture support for AMD GPU devices
#
ccflags-y := -Iinclude/drm -Idrivers/gpu/drm/amd/include/
amdkfd-y := kfd_module.o kfd_device.o kfd_chardev.o kfd_topology.o \
kfd_pasid.o kfd_doorbell.o kfd_flat_memory.o \
kfd_process.o kfd_queue.o kfd_mqd_manager.o \
kfd_kernel_queue.o kfd_packet_manager.o \
kfd_process_queue_manager.o kfd_device_queue_manager.o \
kfd_interrupt.o
obj-$(CONFIG_HSA_AMD) += amdkfd.o


@ -0,0 +1,221 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef CIK_REGS_H
#define CIK_REGS_H
#define IH_VMID_0_LUT 0x3D40u
#define BIF_DOORBELL_CNTL 0x530Cu
#define SRBM_GFX_CNTL 0xE44
#define PIPEID(x) ((x) << 0)
#define MEID(x) ((x) << 2)
#define VMID(x) ((x) << 4)
#define QUEUEID(x) ((x) << 8)
#define SQ_CONFIG 0x8C00
#define SH_MEM_BASES 0x8C28
/* if PTR32, these are the bases for scratch and lds */
#define PRIVATE_BASE(x) ((x) << 0) /* scratch */
#define SHARED_BASE(x) ((x) << 16) /* LDS */
#define SH_MEM_APE1_BASE 0x8C2C
/* if PTR32, this is the base location of GPUVM */
#define SH_MEM_APE1_LIMIT 0x8C30
/* if PTR32, this is the upper limit of GPUVM */
#define SH_MEM_CONFIG 0x8C34
#define PTR32 (1 << 0)
#define PRIVATE_ATC (1 << 1)
#define ALIGNMENT_MODE(x) ((x) << 2)
#define SH_MEM_ALIGNMENT_MODE_DWORD 0
#define SH_MEM_ALIGNMENT_MODE_DWORD_STRICT 1
#define SH_MEM_ALIGNMENT_MODE_STRICT 2
#define SH_MEM_ALIGNMENT_MODE_UNALIGNED 3
#define DEFAULT_MTYPE(x) ((x) << 4)
#define APE1_MTYPE(x) ((x) << 7)
/* valid for both DEFAULT_MTYPE and APE1_MTYPE */
#define MTYPE_CACHED 0
#define MTYPE_NONCACHED 3
#define SH_STATIC_MEM_CONFIG 0x9604u
#define TC_CFG_L1_LOAD_POLICY0 0xAC68
#define TC_CFG_L1_LOAD_POLICY1 0xAC6C
#define TC_CFG_L1_STORE_POLICY 0xAC70
#define TC_CFG_L2_LOAD_POLICY0 0xAC74
#define TC_CFG_L2_LOAD_POLICY1 0xAC78
#define TC_CFG_L2_STORE_POLICY0 0xAC7C
#define TC_CFG_L2_STORE_POLICY1 0xAC80
#define TC_CFG_L2_ATOMIC_POLICY 0xAC84
#define TC_CFG_L1_VOLATILE 0xAC88
#define TC_CFG_L2_VOLATILE 0xAC8C
#define CP_PQ_WPTR_POLL_CNTL 0xC20C
#define WPTR_POLL_EN (1 << 31)
#define CPC_INT_CNTL 0xC2D0
#define CP_ME1_PIPE0_INT_CNTL 0xC214
#define CP_ME1_PIPE1_INT_CNTL 0xC218
#define CP_ME1_PIPE2_INT_CNTL 0xC21C
#define CP_ME1_PIPE3_INT_CNTL 0xC220
#define CP_ME2_PIPE0_INT_CNTL 0xC224
#define CP_ME2_PIPE1_INT_CNTL 0xC228
#define CP_ME2_PIPE2_INT_CNTL 0xC22C
#define CP_ME2_PIPE3_INT_CNTL 0xC230
#define DEQUEUE_REQUEST_INT_ENABLE (1 << 13)
#define WRM_POLL_TIMEOUT_INT_ENABLE (1 << 17)
#define PRIV_REG_INT_ENABLE (1 << 23)
#define TIME_STAMP_INT_ENABLE (1 << 26)
#define GENERIC2_INT_ENABLE (1 << 29)
#define GENERIC1_INT_ENABLE (1 << 30)
#define GENERIC0_INT_ENABLE (1 << 31)
#define CP_ME1_PIPE0_INT_STATUS 0xC214
#define CP_ME1_PIPE1_INT_STATUS 0xC218
#define CP_ME1_PIPE2_INT_STATUS 0xC21C
#define CP_ME1_PIPE3_INT_STATUS 0xC220
#define CP_ME2_PIPE0_INT_STATUS 0xC224
#define CP_ME2_PIPE1_INT_STATUS 0xC228
#define CP_ME2_PIPE2_INT_STATUS 0xC22C
#define CP_ME2_PIPE3_INT_STATUS 0xC230
#define DEQUEUE_REQUEST_INT_STATUS (1 << 13)
#define WRM_POLL_TIMEOUT_INT_STATUS (1 << 17)
#define PRIV_REG_INT_STATUS (1 << 23)
#define TIME_STAMP_INT_STATUS (1 << 26)
#define GENERIC2_INT_STATUS (1 << 29)
#define GENERIC1_INT_STATUS (1 << 30)
#define GENERIC0_INT_STATUS (1 << 31)
#define CP_HPD_EOP_BASE_ADDR 0xC904
#define CP_HPD_EOP_BASE_ADDR_HI 0xC908
#define CP_HPD_EOP_VMID 0xC90C
#define CP_HPD_EOP_CONTROL 0xC910
#define EOP_SIZE(x) ((x) << 0)
#define EOP_SIZE_MASK (0x3f << 0)
#define CP_MQD_BASE_ADDR 0xC914
#define CP_MQD_BASE_ADDR_HI 0xC918
#define CP_HQD_ACTIVE 0xC91C
#define CP_HQD_VMID 0xC920
#define CP_HQD_PERSISTENT_STATE 0xC924u
#define DEFAULT_CP_HQD_PERSISTENT_STATE (0x33U << 8)
#define PRELOAD_REQ (1 << 0)
#define CP_HQD_PIPE_PRIORITY 0xC928u
#define CP_HQD_QUEUE_PRIORITY 0xC92Cu
#define CP_HQD_QUANTUM 0xC930u
#define QUANTUM_EN 1U
#define QUANTUM_SCALE_1MS (1U << 4)
#define QUANTUM_DURATION(x) ((x) << 8)
#define CP_HQD_PQ_BASE 0xC934
#define CP_HQD_PQ_BASE_HI 0xC938
#define CP_HQD_PQ_RPTR 0xC93C
#define CP_HQD_PQ_RPTR_REPORT_ADDR 0xC940
#define CP_HQD_PQ_RPTR_REPORT_ADDR_HI 0xC944
#define CP_HQD_PQ_WPTR_POLL_ADDR 0xC948
#define CP_HQD_PQ_WPTR_POLL_ADDR_HI 0xC94C
#define CP_HQD_PQ_DOORBELL_CONTROL 0xC950
#define DOORBELL_OFFSET(x) ((x) << 2)
#define DOORBELL_OFFSET_MASK (0x1fffff << 2)
#define DOORBELL_SOURCE (1 << 28)
#define DOORBELL_SCHD_HIT (1 << 29)
#define DOORBELL_EN (1 << 30)
#define DOORBELL_HIT (1 << 31)
#define CP_HQD_PQ_WPTR 0xC954
#define CP_HQD_PQ_CONTROL 0xC958
#define QUEUE_SIZE(x) ((x) << 0)
#define QUEUE_SIZE_MASK (0x3f << 0)
#define RPTR_BLOCK_SIZE(x) ((x) << 8)
#define RPTR_BLOCK_SIZE_MASK (0x3f << 8)
#define MIN_AVAIL_SIZE(x) ((x) << 20)
#define PQ_ATC_EN (1 << 23)
#define PQ_VOLATILE (1 << 26)
#define NO_UPDATE_RPTR (1 << 27)
#define UNORD_DISPATCH (1 << 28)
#define ROQ_PQ_IB_FLIP (1 << 29)
#define PRIV_STATE (1 << 30)
#define KMD_QUEUE (1 << 31)
#define DEFAULT_RPTR_BLOCK_SIZE RPTR_BLOCK_SIZE(5)
#define DEFAULT_MIN_AVAIL_SIZE MIN_AVAIL_SIZE(3)
#define CP_HQD_IB_BASE_ADDR 0xC95Cu
#define CP_HQD_IB_BASE_ADDR_HI 0xC960u
#define CP_HQD_IB_RPTR 0xC964u
#define CP_HQD_IB_CONTROL 0xC968u
#define IB_ATC_EN (1U << 23)
#define DEFAULT_MIN_IB_AVAIL_SIZE (3U << 20)
#define CP_HQD_DEQUEUE_REQUEST 0xC974
#define DEQUEUE_REQUEST_DRAIN 1
#define DEQUEUE_REQUEST_RESET 2
#define DEQUEUE_INT (1U << 8)
#define CP_HQD_SEMA_CMD 0xC97Cu
#define CP_HQD_MSG_TYPE 0xC980u
#define CP_HQD_ATOMIC0_PREOP_LO 0xC984u
#define CP_HQD_ATOMIC0_PREOP_HI 0xC988u
#define CP_HQD_ATOMIC1_PREOP_LO 0xC98Cu
#define CP_HQD_ATOMIC1_PREOP_HI 0xC990u
#define CP_HQD_HQ_SCHEDULER0 0xC994u
#define CP_HQD_HQ_SCHEDULER1 0xC998u
#define CP_MQD_CONTROL 0xC99C
#define MQD_VMID(x) ((x) << 0)
#define MQD_VMID_MASK (0xf << 0)
#define MQD_CONTROL_PRIV_STATE_EN (1U << 8)
#define GRBM_GFX_INDEX 0x30800
#define INSTANCE_INDEX(x) ((x) << 0)
#define SH_INDEX(x) ((x) << 8)
#define SE_INDEX(x) ((x) << 16)
#define SH_BROADCAST_WRITES (1 << 29)
#define INSTANCE_BROADCAST_WRITES (1 << 30)
#define SE_BROADCAST_WRITES (1 << 31)
#define SQC_CACHES 0x30d20
#define SQC_POLICY 0x8C38u
#define SQC_VOLATILE 0x8C3Cu
#define CP_PERFMON_CNTL 0x36020
#define ATC_VMID0_PASID_MAPPING 0x339Cu
#define ATC_VMID_PASID_MAPPING_UPDATE_STATUS 0x3398u
#define ATC_VMID_PASID_MAPPING_VALID (1U << 31)
#define ATC_VM_APERTURE0_CNTL 0x3310u
#define ATS_ACCESS_MODE_NEVER 0
#define ATS_ACCESS_MODE_ALWAYS 1
#define ATC_VM_APERTURE0_CNTL2 0x3318u
#define ATC_VM_APERTURE0_HIGH_ADDR 0x3308u
#define ATC_VM_APERTURE0_LOW_ADDR 0x3300u
#define ATC_VM_APERTURE1_CNTL 0x3314u
#define ATC_VM_APERTURE1_CNTL2 0x331Cu
#define ATC_VM_APERTURE1_HIGH_ADDR 0x330Cu
#define ATC_VM_APERTURE1_LOW_ADDR 0x3304u
#endif


@ -0,0 +1,595 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/device.h>
#include <linux/export.h>
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/compat.h>
#include <uapi/linux/kfd_ioctl.h>
#include <linux/time.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
#include <uapi/asm-generic/mman-common.h>
#include <asm/processor.h>
#include "kfd_priv.h"
#include "kfd_device_queue_manager.h"
static long kfd_ioctl(struct file *, unsigned int, unsigned long);
static int kfd_open(struct inode *, struct file *);
static int kfd_mmap(struct file *, struct vm_area_struct *);
static const char kfd_dev_name[] = "kfd";
static const struct file_operations kfd_fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = kfd_ioctl,
.compat_ioctl = kfd_ioctl,
.open = kfd_open,
.mmap = kfd_mmap,
};
static int kfd_char_dev_major = -1;
static struct class *kfd_class;
struct device *kfd_device;
int kfd_chardev_init(void)
{
int err = 0;
kfd_char_dev_major = register_chrdev(0, kfd_dev_name, &kfd_fops);
err = kfd_char_dev_major;
if (err < 0)
goto err_register_chrdev;
kfd_class = class_create(THIS_MODULE, kfd_dev_name);
err = PTR_ERR(kfd_class);
if (IS_ERR(kfd_class))
goto err_class_create;
kfd_device = device_create(kfd_class, NULL,
MKDEV(kfd_char_dev_major, 0),
NULL, kfd_dev_name);
err = PTR_ERR(kfd_device);
if (IS_ERR(kfd_device))
goto err_device_create;
return 0;
err_device_create:
class_destroy(kfd_class);
err_class_create:
unregister_chrdev(kfd_char_dev_major, kfd_dev_name);
err_register_chrdev:
return err;
}
void kfd_chardev_exit(void)
{
device_destroy(kfd_class, MKDEV(kfd_char_dev_major, 0));
class_destroy(kfd_class);
unregister_chrdev(kfd_char_dev_major, kfd_dev_name);
}
struct device *kfd_chardev(void)
{
return kfd_device;
}
static int kfd_open(struct inode *inode, struct file *filep)
{
struct kfd_process *process;
bool is_32bit_user_mode;
if (iminor(inode) != 0)
return -ENODEV;
is_32bit_user_mode = is_compat_task();
if (is_32bit_user_mode == true) {
dev_warn(kfd_device,
"Process %d (32-bit) failed to open /dev/kfd\n"
"32-bit processes are not supported by amdkfd\n",
current->pid);
return -EPERM;
}
process = kfd_create_process(current);
if (IS_ERR(process))
return PTR_ERR(process);
process->is_32bit_user_mode = is_32bit_user_mode;
dev_dbg(kfd_device, "process %d opened, compat mode (32 bit) - %d\n",
process->pasid, process->is_32bit_user_mode);
kfd_init_apertures(process);
return 0;
}
static long kfd_ioctl_get_version(struct file *filep, struct kfd_process *p,
void __user *arg)
{
struct kfd_ioctl_get_version_args args;
int err = 0;
args.major_version = KFD_IOCTL_MAJOR_VERSION;
args.minor_version = KFD_IOCTL_MINOR_VERSION;
if (copy_to_user(arg, &args, sizeof(args)))
err = -EFAULT;
return err;
}
static int set_queue_properties_from_user(struct queue_properties *q_properties,
struct kfd_ioctl_create_queue_args *args)
{
if (args->queue_percentage > KFD_MAX_QUEUE_PERCENTAGE) {
pr_err("kfd: queue percentage must be between 0 to KFD_MAX_QUEUE_PERCENTAGE\n");
return -EINVAL;
}
if (args->queue_priority > KFD_MAX_QUEUE_PRIORITY) {
pr_err("kfd: queue priority must be between 0 to KFD_MAX_QUEUE_PRIORITY\n");
return -EINVAL;
}
if ((args->ring_base_address) &&
(!access_ok(VERIFY_WRITE,
(const void __user *) args->ring_base_address,
sizeof(uint64_t)))) {
pr_err("kfd: can't access ring base address\n");
return -EFAULT;
}
if (!is_power_of_2(args->ring_size) && (args->ring_size != 0)) {
pr_err("kfd: ring size must be a power of 2 or 0\n");
return -EINVAL;
}
if (!access_ok(VERIFY_WRITE,
(const void __user *) args->read_pointer_address,
sizeof(uint32_t))) {
pr_err("kfd: can't access read pointer\n");
return -EFAULT;
}
if (!access_ok(VERIFY_WRITE,
(const void __user *) args->write_pointer_address,
sizeof(uint32_t))) {
pr_err("kfd: can't access write pointer\n");
return -EFAULT;
}
q_properties->is_interop = false;
q_properties->queue_percent = args->queue_percentage;
q_properties->priority = args->queue_priority;
q_properties->queue_address = args->ring_base_address;
q_properties->queue_size = args->ring_size;
q_properties->read_ptr = (uint32_t *) args->read_pointer_address;
q_properties->write_ptr = (uint32_t *) args->write_pointer_address;
if (args->queue_type == KFD_IOC_QUEUE_TYPE_COMPUTE ||
args->queue_type == KFD_IOC_QUEUE_TYPE_COMPUTE_AQL)
q_properties->type = KFD_QUEUE_TYPE_COMPUTE;
else
return -ENOTSUPP;
if (args->queue_type == KFD_IOC_QUEUE_TYPE_COMPUTE_AQL)
q_properties->format = KFD_QUEUE_FORMAT_AQL;
else
q_properties->format = KFD_QUEUE_FORMAT_PM4;
pr_debug("Queue Percentage (%d, %d)\n",
q_properties->queue_percent, args->queue_percentage);
pr_debug("Queue Priority (%d, %d)\n",
q_properties->priority, args->queue_priority);
pr_debug("Queue Address (0x%llX, 0x%llX)\n",
q_properties->queue_address, args->ring_base_address);
pr_debug("Queue Size (0x%llX, %u)\n",
q_properties->queue_size, args->ring_size);
pr_debug("Queue r/w Pointers (0x%llX, 0x%llX)\n",
(uint64_t) q_properties->read_ptr,
(uint64_t) q_properties->write_ptr);
pr_debug("Queue Format (%d)\n", q_properties->format);
return 0;
}
static long kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
void __user *arg)
{
struct kfd_ioctl_create_queue_args args;
struct kfd_dev *dev;
int err = 0;
unsigned int queue_id;
struct kfd_process_device *pdd;
struct queue_properties q_properties;
memset(&q_properties, 0, sizeof(struct queue_properties));
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
pr_debug("kfd: creating queue ioctl\n");
err = set_queue_properties_from_user(&q_properties, &args);
if (err)
return err;
dev = kfd_device_by_id(args.gpu_id);
if (dev == NULL)
return -EINVAL;
mutex_lock(&p->mutex);
pdd = kfd_bind_process_to_device(dev, p);
if (IS_ERR(pdd)) {
err = PTR_ERR(pdd);
goto err_bind_process;
}
pr_debug("kfd: creating queue for PASID %d on GPU 0x%x\n",
p->pasid,
dev->id);
err = pqm_create_queue(&p->pqm, dev, filep, &q_properties, 0,
KFD_QUEUE_TYPE_COMPUTE, &queue_id);
if (err != 0)
goto err_create_queue;
args.queue_id = queue_id;
/* Return gpu_id as doorbell offset for mmap usage */
args.doorbell_offset = args.gpu_id << PAGE_SHIFT;
if (copy_to_user(arg, &args, sizeof(args))) {
err = -EFAULT;
goto err_copy_args_out;
}
mutex_unlock(&p->mutex);
pr_debug("kfd: queue id %d was created successfully\n", args.queue_id);
pr_debug("ring buffer address == 0x%016llX\n",
args.ring_base_address);
pr_debug("read ptr address == 0x%016llX\n",
args.read_pointer_address);
pr_debug("write ptr address == 0x%016llX\n",
args.write_pointer_address);
return 0;
err_copy_args_out:
pqm_destroy_queue(&p->pqm, queue_id);
err_create_queue:
err_bind_process:
mutex_unlock(&p->mutex);
return err;
}
static int kfd_ioctl_destroy_queue(struct file *filp, struct kfd_process *p,
void __user *arg)
{
int retval;
struct kfd_ioctl_destroy_queue_args args;
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
pr_debug("kfd: destroying queue id %d for PASID %d\n",
args.queue_id,
p->pasid);
mutex_lock(&p->mutex);
retval = pqm_destroy_queue(&p->pqm, args.queue_id);
mutex_unlock(&p->mutex);
return retval;
}
static int kfd_ioctl_update_queue(struct file *filp, struct kfd_process *p,
void __user *arg)
{
int retval;
struct kfd_ioctl_update_queue_args args;
struct queue_properties properties;
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
if (args.queue_percentage > KFD_MAX_QUEUE_PERCENTAGE) {
pr_err("kfd: queue percentage must be between 0 to KFD_MAX_QUEUE_PERCENTAGE\n");
return -EINVAL;
}
if (args.queue_priority > KFD_MAX_QUEUE_PRIORITY) {
pr_err("kfd: queue priority must be between 0 to KFD_MAX_QUEUE_PRIORITY\n");
return -EINVAL;
}
if ((args.ring_base_address) &&
(!access_ok(VERIFY_WRITE,
(const void __user *) args.ring_base_address,
sizeof(uint64_t)))) {
pr_err("kfd: can't access ring base address\n");
return -EFAULT;
}
if (!is_power_of_2(args.ring_size) && (args.ring_size != 0)) {
pr_err("kfd: ring size must be a power of 2 or 0\n");
return -EINVAL;
}
properties.queue_address = args.ring_base_address;
properties.queue_size = args.ring_size;
properties.queue_percent = args.queue_percentage;
properties.priority = args.queue_priority;
pr_debug("kfd: updating queue id %d for PASID %d\n",
args.queue_id, p->pasid);
mutex_lock(&p->mutex);
retval = pqm_update_queue(&p->pqm, args.queue_id, &properties);
mutex_unlock(&p->mutex);
return retval;
}
static long kfd_ioctl_set_memory_policy(struct file *filep,
struct kfd_process *p, void __user *arg)
{
struct kfd_ioctl_set_memory_policy_args args;
struct kfd_dev *dev;
int err = 0;
struct kfd_process_device *pdd;
enum cache_policy default_policy, alternate_policy;
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
if (args.default_policy != KFD_IOC_CACHE_POLICY_COHERENT
&& args.default_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
return -EINVAL;
}
if (args.alternate_policy != KFD_IOC_CACHE_POLICY_COHERENT
&& args.alternate_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
return -EINVAL;
}
dev = kfd_device_by_id(args.gpu_id);
if (dev == NULL)
return -EINVAL;
mutex_lock(&p->mutex);
pdd = kfd_bind_process_to_device(dev, p);
if (IS_ERR(pdd)) {
err = PTR_ERR(pdd);
goto out;
}
default_policy = (args.default_policy == KFD_IOC_CACHE_POLICY_COHERENT)
? cache_policy_coherent : cache_policy_noncoherent;
alternate_policy =
(args.alternate_policy == KFD_IOC_CACHE_POLICY_COHERENT)
? cache_policy_coherent : cache_policy_noncoherent;
if (!dev->dqm->set_cache_memory_policy(dev->dqm,
&pdd->qpd,
default_policy,
alternate_policy,
(void __user *)args.alternate_aperture_base,
args.alternate_aperture_size))
err = -EINVAL;
out:
mutex_unlock(&p->mutex);
return err;
}
static long kfd_ioctl_get_clock_counters(struct file *filep,
struct kfd_process *p, void __user *arg)
{
struct kfd_ioctl_get_clock_counters_args args;
struct kfd_dev *dev;
struct timespec time;
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
dev = kfd_device_by_id(args.gpu_id);
if (dev == NULL)
return -EINVAL;
/* Reading GPU clock counter from KGD */
args.gpu_clock_counter = kfd2kgd->get_gpu_clock_counter(dev->kgd);
/* No access to rdtsc. Using raw monotonic time */
getrawmonotonic(&time);
args.cpu_clock_counter = (uint64_t)timespec_to_ns(&time);
get_monotonic_boottime(&time);
args.system_clock_counter = (uint64_t)timespec_to_ns(&time);
/* Since the counter is in nano-seconds we use 1GHz frequency */
args.system_clock_freq = 1000000000;
if (copy_to_user(arg, &args, sizeof(args)))
return -EFAULT;
return 0;
}
static int kfd_ioctl_get_process_apertures(struct file *filp,
struct kfd_process *p, void __user *arg)
{
struct kfd_ioctl_get_process_apertures_args args;
struct kfd_process_device_apertures *pAperture;
struct kfd_process_device *pdd;
dev_dbg(kfd_device, "get apertures for PASID %d", p->pasid);
if (copy_from_user(&args, arg, sizeof(args)))
return -EFAULT;
args.num_of_nodes = 0;
mutex_lock(&p->mutex);
/*if the process-device list isn't empty*/
if (kfd_has_process_device_data(p)) {
/* Run over all pdd of the process */
pdd = kfd_get_first_process_device_data(p);
do {
pAperture = &args.process_apertures[args.num_of_nodes];
pAperture->gpu_id = pdd->dev->id;
pAperture->lds_base = pdd->lds_base;
pAperture->lds_limit = pdd->lds_limit;
pAperture->gpuvm_base = pdd->gpuvm_base;
pAperture->gpuvm_limit = pdd->gpuvm_limit;
pAperture->scratch_base = pdd->scratch_base;
pAperture->scratch_limit = pdd->scratch_limit;
dev_dbg(kfd_device,
"node id %u\n", args.num_of_nodes);
dev_dbg(kfd_device,
"gpu id %u\n", pdd->dev->id);
dev_dbg(kfd_device,
"lds_base %llX\n", pdd->lds_base);
dev_dbg(kfd_device,
"lds_limit %llX\n", pdd->lds_limit);
dev_dbg(kfd_device,
"gpuvm_base %llX\n", pdd->gpuvm_base);
dev_dbg(kfd_device,
"gpuvm_limit %llX\n", pdd->gpuvm_limit);
dev_dbg(kfd_device,
"scratch_base %llX\n", pdd->scratch_base);
dev_dbg(kfd_device,
"scratch_limit %llX\n", pdd->scratch_limit);
args.num_of_nodes++;
} while ((pdd = kfd_get_next_process_device_data(p, pdd)) != NULL &&
(args.num_of_nodes < NUM_OF_SUPPORTED_GPUS));
}
mutex_unlock(&p->mutex);
if (copy_to_user(arg, &args, sizeof(args)))
return -EFAULT;
return 0;
}
static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
struct kfd_process *process;
long err = -EINVAL;
dev_dbg(kfd_device,
"ioctl cmd 0x%x (#%d), arg 0x%lx\n",
cmd, _IOC_NR(cmd), arg);
process = kfd_get_process(current);
if (IS_ERR(process))
return PTR_ERR(process);
switch (cmd) {
case KFD_IOC_GET_VERSION:
err = kfd_ioctl_get_version(filep, process, (void __user *)arg);
break;
case KFD_IOC_CREATE_QUEUE:
err = kfd_ioctl_create_queue(filep, process,
(void __user *)arg);
break;
case KFD_IOC_DESTROY_QUEUE:
err = kfd_ioctl_destroy_queue(filep, process,
(void __user *)arg);
break;
case KFD_IOC_SET_MEMORY_POLICY:
err = kfd_ioctl_set_memory_policy(filep, process,
(void __user *)arg);
break;
case KFD_IOC_GET_CLOCK_COUNTERS:
err = kfd_ioctl_get_clock_counters(filep, process,
(void __user *)arg);
break;
case KFD_IOC_GET_PROCESS_APERTURES:
err = kfd_ioctl_get_process_apertures(filep, process,
(void __user *)arg);
break;
case KFD_IOC_UPDATE_QUEUE:
err = kfd_ioctl_update_queue(filep, process,
(void __user *)arg);
break;
default:
dev_err(kfd_device,
"unknown ioctl cmd 0x%x, arg 0x%lx)\n",
cmd, arg);
err = -EINVAL;
break;
}
if (err < 0)
dev_err(kfd_device,
"ioctl error %ld for ioctl cmd 0x%x (#%d)\n",
err, cmd, _IOC_NR(cmd));
return err;
}
static int kfd_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct kfd_process *process;
process = kfd_get_process(current);
if (IS_ERR(process))
return PTR_ERR(process);
return kfd_doorbell_mmap(process, vma);
}


@@ -0,0 +1,294 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef KFD_CRAT_H_INCLUDED
#define KFD_CRAT_H_INCLUDED
#include <linux/types.h>
#pragma pack(1)
/*
* 4CC signature values for the CRAT and CDIT ACPI tables
*/
#define CRAT_SIGNATURE "CRAT"
#define CDIT_SIGNATURE "CDIT"
/*
* Component Resource Association Table (CRAT)
*/
#define CRAT_OEMID_LENGTH 6
#define CRAT_OEMTABLEID_LENGTH 8
#define CRAT_RESERVED_LENGTH 6
#define CRAT_OEMID_64BIT_MASK ((1ULL << (CRAT_OEMID_LENGTH * 8)) - 1)
struct crat_header {
uint32_t signature;
uint32_t length;
uint8_t revision;
uint8_t checksum;
uint8_t oem_id[CRAT_OEMID_LENGTH];
uint8_t oem_table_id[CRAT_OEMTABLEID_LENGTH];
uint32_t oem_revision;
uint32_t creator_id;
uint32_t creator_revision;
uint32_t total_entries;
uint16_t num_domains;
uint8_t reserved[CRAT_RESERVED_LENGTH];
};
/*
* The header structure is immediately followed by total_entries of the
* data definitions
*/
/*
* The currently defined subtype entries in the CRAT
*/
#define CRAT_SUBTYPE_COMPUTEUNIT_AFFINITY 0
#define CRAT_SUBTYPE_MEMORY_AFFINITY 1
#define CRAT_SUBTYPE_CACHE_AFFINITY 2
#define CRAT_SUBTYPE_TLB_AFFINITY 3
#define CRAT_SUBTYPE_CCOMPUTE_AFFINITY 4
#define CRAT_SUBTYPE_IOLINK_AFFINITY 5
#define CRAT_SUBTYPE_MAX 6
#define CRAT_SIBLINGMAP_SIZE 32
/*
* ComputeUnit Affinity structure and definitions
*/
#define CRAT_CU_FLAGS_ENABLED 0x00000001
#define CRAT_CU_FLAGS_HOT_PLUGGABLE 0x00000002
#define CRAT_CU_FLAGS_CPU_PRESENT 0x00000004
#define CRAT_CU_FLAGS_GPU_PRESENT 0x00000008
#define CRAT_CU_FLAGS_IOMMU_PRESENT 0x00000010
#define CRAT_CU_FLAGS_RESERVED 0xffffffe0
#define CRAT_COMPUTEUNIT_RESERVED_LENGTH 4
struct crat_subtype_computeunit {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t proximity_domain;
uint32_t processor_id_low;
uint16_t num_cpu_cores;
uint16_t num_simd_cores;
uint16_t max_waves_simd;
uint16_t io_count;
uint16_t hsa_capability;
uint16_t lds_size_in_kb;
uint8_t wave_front_size;
uint8_t num_banks;
uint16_t micro_engine_id;
uint8_t num_arrays;
uint8_t num_cu_per_array;
uint8_t num_simd_per_cu;
uint8_t max_slots_scatch_cu;
uint8_t reserved2[CRAT_COMPUTEUNIT_RESERVED_LENGTH];
};
/*
* HSA Memory Affinity structure and definitions
*/
#define CRAT_MEM_FLAGS_ENABLED 0x00000001
#define CRAT_MEM_FLAGS_HOT_PLUGGABLE 0x00000002
#define CRAT_MEM_FLAGS_NON_VOLATILE 0x00000004
#define CRAT_MEM_FLAGS_RESERVED 0xfffffff8
#define CRAT_MEMORY_RESERVED_LENGTH 8
struct crat_subtype_memory {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t promixity_domain;
uint32_t base_addr_low;
uint32_t base_addr_high;
uint32_t length_low;
uint32_t length_high;
uint32_t width;
uint8_t reserved2[CRAT_MEMORY_RESERVED_LENGTH];
};
/*
* HSA Cache Affinity structure and definitions
*/
#define CRAT_CACHE_FLAGS_ENABLED 0x00000001
#define CRAT_CACHE_FLAGS_DATA_CACHE 0x00000002
#define CRAT_CACHE_FLAGS_INST_CACHE 0x00000004
#define CRAT_CACHE_FLAGS_CPU_CACHE 0x00000008
#define CRAT_CACHE_FLAGS_SIMD_CACHE 0x00000010
#define CRAT_CACHE_FLAGS_RESERVED 0xffffffe0
#define CRAT_CACHE_RESERVED_LENGTH 8
struct crat_subtype_cache {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t processor_id_low;
uint8_t sibling_map[CRAT_SIBLINGMAP_SIZE];
uint32_t cache_size;
uint8_t cache_level;
uint8_t lines_per_tag;
uint16_t cache_line_size;
uint8_t associativity;
uint8_t cache_properties;
uint16_t cache_latency;
uint8_t reserved2[CRAT_CACHE_RESERVED_LENGTH];
};
/*
* HSA TLB Affinity structure and definitions
*/
#define CRAT_TLB_FLAGS_ENABLED 0x00000001
#define CRAT_TLB_FLAGS_DATA_TLB 0x00000002
#define CRAT_TLB_FLAGS_INST_TLB 0x00000004
#define CRAT_TLB_FLAGS_CPU_TLB 0x00000008
#define CRAT_TLB_FLAGS_SIMD_TLB 0x00000010
#define CRAT_TLB_FLAGS_RESERVED 0xffffffe0
#define CRAT_TLB_RESERVED_LENGTH 4
struct crat_subtype_tlb {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t processor_id_low;
uint8_t sibling_map[CRAT_SIBLINGMAP_SIZE];
uint32_t tlb_level;
uint8_t data_tlb_associativity_2mb;
uint8_t data_tlb_size_2mb;
uint8_t instruction_tlb_associativity_2mb;
uint8_t instruction_tlb_size_2mb;
uint8_t data_tlb_associativity_4k;
uint8_t data_tlb_size_4k;
uint8_t instruction_tlb_associativity_4k;
uint8_t instruction_tlb_size_4k;
uint8_t data_tlb_associativity_1gb;
uint8_t data_tlb_size_1gb;
uint8_t instruction_tlb_associativity_1gb;
uint8_t instruction_tlb_size_1gb;
uint8_t reserved2[CRAT_TLB_RESERVED_LENGTH];
};
/*
* HSA CCompute/APU Affinity structure and definitions
*/
#define CRAT_CCOMPUTE_FLAGS_ENABLED 0x00000001
#define CRAT_CCOMPUTE_FLAGS_RESERVED 0xfffffffe
#define CRAT_CCOMPUTE_RESERVED_LENGTH 16
struct crat_subtype_ccompute {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t processor_id_low;
uint8_t sibling_map[CRAT_SIBLINGMAP_SIZE];
uint32_t apu_size;
uint8_t reserved2[CRAT_CCOMPUTE_RESERVED_LENGTH];
};
/*
* HSA IO Link Affinity structure and definitions
*/
#define CRAT_IOLINK_FLAGS_ENABLED 0x00000001
#define CRAT_IOLINK_FLAGS_COHERENCY 0x00000002
#define CRAT_IOLINK_FLAGS_RESERVED 0xfffffffc
/*
* IO interface types
*/
#define CRAT_IOLINK_TYPE_UNDEFINED 0
#define CRAT_IOLINK_TYPE_HYPERTRANSPORT 1
#define CRAT_IOLINK_TYPE_PCIEXPRESS 2
#define CRAT_IOLINK_TYPE_OTHER 3
#define CRAT_IOLINK_TYPE_MAX 255
#define CRAT_IOLINK_RESERVED_LENGTH 24
struct crat_subtype_iolink {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
uint32_t proximity_domain_from;
uint32_t proximity_domain_to;
uint8_t io_interface_type;
uint8_t version_major;
uint16_t version_minor;
uint32_t minimum_latency;
uint32_t maximum_latency;
uint32_t minimum_bandwidth_mbs;
uint32_t maximum_bandwidth_mbs;
uint32_t recommended_transfer_size;
uint8_t reserved2[CRAT_IOLINK_RESERVED_LENGTH];
};
/*
* HSA generic sub-type header
*/
#define CRAT_SUBTYPE_FLAGS_ENABLED 0x00000001
struct crat_subtype_generic {
uint8_t type;
uint8_t length;
uint16_t reserved;
uint32_t flags;
};
/*
* Component Locality Distance Information Table (CDIT)
*/
#define CDIT_OEMID_LENGTH 6
#define CDIT_OEMTABLEID_LENGTH 8
struct cdit_header {
uint32_t signature;
uint32_t length;
uint8_t revision;
uint8_t checksum;
uint8_t oem_id[CDIT_OEMID_LENGTH];
uint8_t oem_table_id[CDIT_OEMTABLEID_LENGTH];
uint32_t oem_revision;
uint32_t creator_id;
uint32_t creator_revision;
uint32_t total_entries;
uint16_t num_domains;
uint8_t entry[1];
};
#pragma pack()
#endif /* KFD_CRAT_H_INCLUDED */
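/*
 * Illustrative sketch (not part of this patch): walking a CRAT image using
 * the definitions above. Every sub-type entry starts with the generic
 * type/length/flags header, so the table can be traversed without knowing
 * each entry's concrete layout. The helper name and 'visit' callback are
 * hypothetical.
 */
static inline void crat_for_each_subtype(const struct crat_header *crat,
		void (*visit)(const struct crat_subtype_generic *sub))
{
	const uint8_t *p = (const uint8_t *)(crat + 1);
	uint32_t i;

	for (i = 0; i < crat->total_entries; i++) {
		const struct crat_subtype_generic *sub =
				(const struct crat_subtype_generic *)p;

		if (sub->flags & CRAT_SUBTYPE_FLAGS_ENABLED)
			visit(sub);
		p += sub->length;
	}
}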


@@ -0,0 +1,308 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/amd-iommu.h>
#include <linux/bsearch.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include "kfd_priv.h"
#include "kfd_device_queue_manager.h"
#define MQD_SIZE_ALIGNED 768
static const struct kfd_device_info kaveri_device_info = {
.max_pasid_bits = 16,
.ih_ring_entry_size = 4 * sizeof(uint32_t),
.mqd_size_aligned = MQD_SIZE_ALIGNED
};
struct kfd_deviceid {
unsigned short did;
const struct kfd_device_info *device_info;
};
/* Please keep this sorted by increasing device id. */
static const struct kfd_deviceid supported_devices[] = {
{ 0x1304, &kaveri_device_info }, /* Kaveri */
{ 0x1305, &kaveri_device_info }, /* Kaveri */
{ 0x1306, &kaveri_device_info }, /* Kaveri */
{ 0x1307, &kaveri_device_info }, /* Kaveri */
{ 0x1309, &kaveri_device_info }, /* Kaveri */
{ 0x130A, &kaveri_device_info }, /* Kaveri */
{ 0x130B, &kaveri_device_info }, /* Kaveri */
{ 0x130C, &kaveri_device_info }, /* Kaveri */
{ 0x130D, &kaveri_device_info }, /* Kaveri */
{ 0x130E, &kaveri_device_info }, /* Kaveri */
{ 0x130F, &kaveri_device_info }, /* Kaveri */
{ 0x1310, &kaveri_device_info }, /* Kaveri */
{ 0x1311, &kaveri_device_info }, /* Kaveri */
{ 0x1312, &kaveri_device_info }, /* Kaveri */
{ 0x1313, &kaveri_device_info }, /* Kaveri */
{ 0x1315, &kaveri_device_info }, /* Kaveri */
{ 0x1316, &kaveri_device_info }, /* Kaveri */
{ 0x1317, &kaveri_device_info }, /* Kaveri */
{ 0x1318, &kaveri_device_info }, /* Kaveri */
{ 0x131B, &kaveri_device_info }, /* Kaveri */
{ 0x131C, &kaveri_device_info }, /* Kaveri */
{ 0x131D, &kaveri_device_info }, /* Kaveri */
};
static const struct kfd_device_info *lookup_device_info(unsigned short did)
{
size_t i;
for (i = 0; i < ARRAY_SIZE(supported_devices); i++) {
if (supported_devices[i].did == did) {
BUG_ON(supported_devices[i].device_info == NULL);
return supported_devices[i].device_info;
}
}
return NULL;
}
struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev)
{
struct kfd_dev *kfd;
const struct kfd_device_info *device_info =
lookup_device_info(pdev->device);
if (!device_info)
return NULL;
kfd = kzalloc(sizeof(*kfd), GFP_KERNEL);
if (!kfd)
return NULL;
kfd->kgd = kgd;
kfd->device_info = device_info;
kfd->pdev = pdev;
kfd->init_complete = false;
return kfd;
}
static bool device_iommu_pasid_init(struct kfd_dev *kfd)
{
const u32 required_iommu_flags = AMD_IOMMU_DEVICE_FLAG_ATS_SUP |
AMD_IOMMU_DEVICE_FLAG_PRI_SUP |
AMD_IOMMU_DEVICE_FLAG_PASID_SUP;
struct amd_iommu_device_info iommu_info;
unsigned int pasid_limit;
int err;
err = amd_iommu_device_info(kfd->pdev, &iommu_info);
if (err < 0) {
dev_err(kfd_device,
"error getting iommu info. is the iommu enabled?\n");
return false;
}
if ((iommu_info.flags & required_iommu_flags) != required_iommu_flags) {
dev_err(kfd_device, "error required iommu flags ats(%i), pri(%i), pasid(%i)\n",
(iommu_info.flags & AMD_IOMMU_DEVICE_FLAG_ATS_SUP) != 0,
(iommu_info.flags & AMD_IOMMU_DEVICE_FLAG_PRI_SUP) != 0,
(iommu_info.flags & AMD_IOMMU_DEVICE_FLAG_PASID_SUP) != 0);
return false;
}
pasid_limit = min_t(unsigned int,
(unsigned int)1 << kfd->device_info->max_pasid_bits,
iommu_info.max_pasids);
/*
* The last pasid is used for kernel queue doorbells;
* in the future the last pasid might be used for a kernel thread.
*/
pasid_limit = min_t(unsigned int,
pasid_limit,
kfd->doorbell_process_limit - 1);
err = amd_iommu_init_device(kfd->pdev, pasid_limit);
if (err < 0) {
dev_err(kfd_device, "error initializing iommu device\n");
return false;
}
if (!kfd_set_pasid_limit(pasid_limit)) {
dev_err(kfd_device, "error setting pasid limit\n");
amd_iommu_free_device(kfd->pdev);
return false;
}
return true;
}
static void iommu_pasid_shutdown_callback(struct pci_dev *pdev, int pasid)
{
struct kfd_dev *dev = kfd_device_by_pci_dev(pdev);
if (dev)
kfd_unbind_process_from_device(dev, pasid);
}
bool kgd2kfd_device_init(struct kfd_dev *kfd,
const struct kgd2kfd_shared_resources *gpu_resources)
{
unsigned int size;
kfd->shared_resources = *gpu_resources;
/* calculate max size of mqds needed for queues */
size = max_num_of_processes *
max_num_of_queues_per_process *
kfd->device_info->mqd_size_aligned;
/* add another 512KB for all other allocations on gart */
size += 512 * 1024;
if (kfd2kgd->init_sa_manager(kfd->kgd, size)) {
dev_err(kfd_device,
"Error initializing sa manager for device (%x:%x)\n",
kfd->pdev->vendor, kfd->pdev->device);
goto out;
}
kfd_doorbell_init(kfd);
if (kfd_topology_add_device(kfd) != 0) {
dev_err(kfd_device,
"Error adding device (%x:%x) to topology\n",
kfd->pdev->vendor, kfd->pdev->device);
goto kfd_topology_add_device_error;
}
if (kfd_interrupt_init(kfd)) {
dev_err(kfd_device,
"Error initializing interrupts for device (%x:%x)\n",
kfd->pdev->vendor, kfd->pdev->device);
goto kfd_interrupt_error;
}
if (!device_iommu_pasid_init(kfd)) {
dev_err(kfd_device,
"Error initializing iommuv2 for device (%x:%x)\n",
kfd->pdev->vendor, kfd->pdev->device);
goto device_iommu_pasid_error;
}
amd_iommu_set_invalidate_ctx_cb(kfd->pdev,
iommu_pasid_shutdown_callback);
kfd->dqm = device_queue_manager_init(kfd);
if (!kfd->dqm) {
dev_err(kfd_device,
"Error initializing queue manager for device (%x:%x)\n",
kfd->pdev->vendor, kfd->pdev->device);
goto device_queue_manager_error;
}
if (kfd->dqm->start(kfd->dqm) != 0) {
dev_err(kfd_device,
"Error starting queuen manager for device (%x:%x)\n",
kfd->pdev->vendor, kfd->pdev->device);
goto dqm_start_error;
}
kfd->init_complete = true;
dev_info(kfd_device, "added device (%x:%x)\n", kfd->pdev->vendor,
kfd->pdev->device);
pr_debug("kfd: Starting kfd with the following scheduling policy %d\n",
sched_policy);
goto out;
dqm_start_error:
device_queue_manager_uninit(kfd->dqm);
device_queue_manager_error:
amd_iommu_free_device(kfd->pdev);
device_iommu_pasid_error:
kfd_interrupt_exit(kfd);
kfd_interrupt_error:
kfd_topology_remove_device(kfd);
kfd_topology_add_device_error:
kfd2kgd->fini_sa_manager(kfd->kgd);
dev_err(kfd_device,
"device (%x:%x) NOT added due to errors\n",
kfd->pdev->vendor, kfd->pdev->device);
out:
return kfd->init_complete;
}
void kgd2kfd_device_exit(struct kfd_dev *kfd)
{
if (kfd->init_complete) {
device_queue_manager_uninit(kfd->dqm);
amd_iommu_free_device(kfd->pdev);
kfd_interrupt_exit(kfd);
kfd_topology_remove_device(kfd);
}
kfree(kfd);
}
void kgd2kfd_suspend(struct kfd_dev *kfd)
{
BUG_ON(kfd == NULL);
if (kfd->init_complete) {
kfd->dqm->stop(kfd->dqm);
amd_iommu_set_invalidate_ctx_cb(kfd->pdev, NULL);
amd_iommu_free_device(kfd->pdev);
}
}
int kgd2kfd_resume(struct kfd_dev *kfd)
{
unsigned int pasid_limit;
int err;
BUG_ON(kfd == NULL);
pasid_limit = kfd_get_pasid_limit();
if (kfd->init_complete) {
err = amd_iommu_init_device(kfd->pdev, pasid_limit);
if (err < 0)
return -ENXIO;
amd_iommu_set_invalidate_ctx_cb(kfd->pdev,
iommu_pasid_shutdown_callback);
kfd->dqm->start(kfd->dqm);
}
return 0;
}
/* This is called directly from KGD at ISR. */
void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
{
if (kfd->init_complete) {
spin_lock(&kfd->interrupt_lock);
if (kfd->interrupts_active
&& enqueue_ih_ring_entry(kfd, ih_ring_entry))
schedule_work(&kfd->interrupt_work);
spin_unlock(&kfd->interrupt_lock);
}
}

Diff for one file is not shown because of its large size.


@@ -0,0 +1,146 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef KFD_DEVICE_QUEUE_MANAGER_H_
#define KFD_DEVICE_QUEUE_MANAGER_H_
#include <linux/rwsem.h>
#include <linux/list.h>
#include "kfd_priv.h"
#include "kfd_mqd_manager.h"
#define QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS (500)
#define QUEUES_PER_PIPE (8)
#define PIPE_PER_ME_CP_SCHEDULING (3)
#define CIK_VMID_NUM (8)
#define KFD_VMID_START_OFFSET (8)
#define VMID_PER_DEVICE CIK_VMID_NUM
#define KFD_DQM_FIRST_PIPE (0)
struct device_process_node {
struct qcm_process_device *qpd;
struct list_head list;
};
/**
* struct device_queue_manager
*
* @create_queue: Queue creation routine.
*
* @destroy_queue: Queue destruction routine.
*
* @update_queue: Queue update routine.
*
* @get_mqd_manager: Returns the mqd manager according to the mqd type.
*
* @execute_queues: Dispatches the queues list to the H/W.
*
* @register_process: This routine associates a specific process with device.
*
* @unregister_process: destroys the associations between process to device.
*
* @initialize: Initializes the pipelines and memory module for that device.
*
* @start: Initializes the resources/modules the device needs for queue
* execution. This function is called on device initialization and after the
* system wakes up from suspend.
*
* @stop: This routine stops execution of all the active queues running on the
* H/W; it is called on system suspend.
*
* @uninitialize: Destroys all the device queue manager resources allocated in
* initialize routine.
*
* @create_kernel_queue: Creates kernel queue. Used for debug queue.
*
* @destroy_kernel_queue: Destroys kernel queue. Used for debug queue.
*
* @set_cache_memory_policy: Sets memory policy (cached/non-cached) for the
* memory apertures.
*
* This struct is a base class for the kfd queue scheduler at the
* device level. The device base class should expose the basic operations
* for queue creation and queue destruction. This base class hides the
* scheduling mode of the driver and the specific implementation of the
* concrete device. This class is the only class in the queues scheduler
* that configures the H/W.
*/
struct device_queue_manager {
int (*create_queue)(struct device_queue_manager *dqm,
struct queue *q,
struct qcm_process_device *qpd,
int *allocate_vmid);
int (*destroy_queue)(struct device_queue_manager *dqm,
struct qcm_process_device *qpd,
struct queue *q);
int (*update_queue)(struct device_queue_manager *dqm,
struct queue *q);
struct mqd_manager * (*get_mqd_manager)
(struct device_queue_manager *dqm,
enum KFD_MQD_TYPE type);
int (*register_process)(struct device_queue_manager *dqm,
struct qcm_process_device *qpd);
int (*unregister_process)(struct device_queue_manager *dqm,
struct qcm_process_device *qpd);
int (*initialize)(struct device_queue_manager *dqm);
int (*start)(struct device_queue_manager *dqm);
int (*stop)(struct device_queue_manager *dqm);
void (*uninitialize)(struct device_queue_manager *dqm);
int (*create_kernel_queue)(struct device_queue_manager *dqm,
struct kernel_queue *kq,
struct qcm_process_device *qpd);
void (*destroy_kernel_queue)(struct device_queue_manager *dqm,
struct kernel_queue *kq,
struct qcm_process_device *qpd);
bool (*set_cache_memory_policy)(struct device_queue_manager *dqm,
struct qcm_process_device *qpd,
enum cache_policy default_policy,
enum cache_policy alternate_policy,
void __user *alternate_aperture_base,
uint64_t alternate_aperture_size);
struct mqd_manager *mqds[KFD_MQD_TYPE_MAX];
struct packet_manager packets;
struct kfd_dev *dev;
struct mutex lock;
struct list_head queues;
unsigned int processes_count;
unsigned int queue_count;
unsigned int next_pipe_to_allocate;
unsigned int *allocated_queues;
unsigned int vmid_bitmap;
uint64_t pipelines_addr;
struct kfd_mem_obj *pipeline_mem;
uint64_t fence_gpu_addr;
unsigned int *fence_addr;
struct kfd_mem_obj *fence_mem;
bool active_runlist;
};
#endif /* KFD_DEVICE_QUEUE_MANAGER_H_ */
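/*
 * Illustrative sketch only (not part of this patch): the function-pointer
 * members above are meant to be driven roughly like this. The real call
 * sites, locking, and ordering live in the device queue manager and process
 * queue manager sources; example_dqm_create_queue is hypothetical.
 */
static inline int example_dqm_create_queue(struct device_queue_manager *dqm,
		struct queue *q, struct qcm_process_device *qpd)
{
	int allocate_vmid = 0;
	int err;

	err = dqm->register_process(dqm, qpd);
	if (err != 0)
		return err;
	return dqm->create_queue(dqm, q, qpd, &allocate_vmid);
}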


@@ -0,0 +1,256 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include "kfd_priv.h"
#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/slab.h>
#include <linux/io.h>
/*
* This extension supports kernel-level doorbell management for the
* kernel queues.
* Basically the last doorbell page is devoted to kernel queues,
* and that ensures that no user process can get access to the
* kernel doorbell page
*/
static DEFINE_MUTEX(doorbell_mutex);
static unsigned long doorbell_available_index[
DIV_ROUND_UP(KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)] = { 0 };
#define KERNEL_DOORBELL_PASID 1
#define KFD_SIZE_OF_DOORBELL_IN_BYTES 4
/*
* Each device exposes a doorbell aperture, a PCI MMIO aperture that
* receives 32-bit writes that are passed to queues as wptr values.
* The doorbells are intended to be written by applications as part
* of queueing work on user-mode queues.
* We assign doorbells to applications in PAGE_SIZE-sized and aligned chunks.
* We map the doorbell address space into user-mode when a process creates
* its first queue on each device.
* Although the mapping is done by KFD, it is equivalent to an mmap of
* the /dev/kfd with the particular device encoded in the mmap offset.
* There will be other uses for mmap of /dev/kfd, so only a range of
* offsets (KFD_MMAP_DOORBELL_START-END) is used for doorbells.
*/
/* # of doorbell bytes allocated for each process. */
static inline size_t doorbell_process_allocation(void)
{
return roundup(KFD_SIZE_OF_DOORBELL_IN_BYTES *
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS,
PAGE_SIZE);
}
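/*
 * Worked example (illustrative, assuming 4 KiB pages and a per-process queue
 * limit of 1024; the actual value is KFD_MAX_NUM_OF_QUEUES_PER_PROCESS in
 * kfd_priv.h): 4 bytes * 1024 queues = 4096 bytes, which roundup() leaves at
 * exactly one page, so each process owns one whole doorbell page per device.
 */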
/* Doorbell calculations for device init. */
void kfd_doorbell_init(struct kfd_dev *kfd)
{
size_t doorbell_start_offset;
size_t doorbell_aperture_size;
size_t doorbell_process_limit;
/*
* We start with calculations in bytes because the input data might
* only be byte-aligned.
* Only after we have done the rounding can we assume any alignment.
*/
doorbell_start_offset =
roundup(kfd->shared_resources.doorbell_start_offset,
doorbell_process_allocation());
doorbell_aperture_size =
rounddown(kfd->shared_resources.doorbell_aperture_size,
doorbell_process_allocation());
if (doorbell_aperture_size > doorbell_start_offset)
doorbell_process_limit =
(doorbell_aperture_size - doorbell_start_offset) /
doorbell_process_allocation();
else
doorbell_process_limit = 0;
kfd->doorbell_base = kfd->shared_resources.doorbell_physical_address +
doorbell_start_offset;
kfd->doorbell_id_offset = doorbell_start_offset / sizeof(u32);
kfd->doorbell_process_limit = doorbell_process_limit - 1;
kfd->doorbell_kernel_ptr = ioremap(kfd->doorbell_base,
doorbell_process_allocation());
BUG_ON(!kfd->doorbell_kernel_ptr);
pr_debug("kfd: doorbell initialization:\n");
pr_debug("kfd: doorbell base == 0x%08lX\n",
(uintptr_t)kfd->doorbell_base);
pr_debug("kfd: doorbell_id_offset == 0x%08lX\n",
kfd->doorbell_id_offset);
pr_debug("kfd: doorbell_process_limit == 0x%08lX\n",
doorbell_process_limit);
pr_debug("kfd: doorbell_kernel_offset == 0x%08lX\n",
(uintptr_t)kfd->doorbell_base);
pr_debug("kfd: doorbell aperture size == 0x%08lX\n",
kfd->shared_resources.doorbell_aperture_size);
pr_debug("kfd: doorbell kernel address == 0x%08lX\n",
(uintptr_t)kfd->doorbell_kernel_ptr);
}
int kfd_doorbell_mmap(struct kfd_process *process, struct vm_area_struct *vma)
{
phys_addr_t address;
struct kfd_dev *dev;
/*
* For simplicity we only allow mapping of the entire doorbell
* allocation of a single device & process.
*/
if (vma->vm_end - vma->vm_start != doorbell_process_allocation())
return -EINVAL;
/* Find kfd device according to gpu id */
dev = kfd_device_by_id(vma->vm_pgoff);
if (dev == NULL)
return -EINVAL;
/* Find if pdd exists for combination of process and gpu id */
if (!kfd_get_process_device_data(dev, process, 0))
return -EINVAL;
/* Calculate physical address of doorbell */
address = kfd_get_process_doorbells(dev, process);
vma->vm_flags |= VM_IO | VM_DONTCOPY | VM_DONTEXPAND | VM_NORESERVE |
VM_DONTDUMP | VM_PFNMAP;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pr_debug("kfd: mapping doorbell page in kfd_doorbell_mmap\n"
" target user address == 0x%08llX\n"
" physical address == 0x%08llX\n"
" vm_flags == 0x%04lX\n"
" size == 0x%04lX\n",
(unsigned long long) vma->vm_start, address, vma->vm_flags,
doorbell_process_allocation());
return io_remap_pfn_range(vma,
vma->vm_start,
address >> PAGE_SHIFT,
doorbell_process_allocation(),
vma->vm_page_prot);
}
/* get kernel iomem pointer for a doorbell */
u32 __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
unsigned int *doorbell_off)
{
u32 inx;
BUG_ON(!kfd || !doorbell_off);
mutex_lock(&doorbell_mutex);
inx = find_first_zero_bit(doorbell_available_index,
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
__set_bit(inx, doorbell_available_index);
mutex_unlock(&doorbell_mutex);
if (inx >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
return NULL;
/*
* Calculate the kernel doorbell offset using the "faked" kernel
* pasid that is allocated for kernel queues only
*/
*doorbell_off = KERNEL_DOORBELL_PASID * (doorbell_process_allocation() /
sizeof(u32)) + inx;
pr_debug("kfd: get kernel queue doorbell\n"
" doorbell offset == 0x%08d\n"
" kernel address == 0x%08lX\n",
*doorbell_off, (uintptr_t)(kfd->doorbell_kernel_ptr + inx));
return kfd->doorbell_kernel_ptr + inx;
}
void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr)
{
unsigned int inx;
BUG_ON(!kfd || !db_addr);
inx = (unsigned int)(db_addr - kfd->doorbell_kernel_ptr);
mutex_lock(&doorbell_mutex);
__clear_bit(inx, doorbell_available_index);
mutex_unlock(&doorbell_mutex);
}
inline void write_kernel_doorbell(u32 __iomem *db, u32 value)
{
if (db) {
writel(value, db);
pr_debug("writing %d to doorbell address 0x%p\n", value, db);
}
}
/*
* queue_ids are in the range [0,MAX_PROCESS_QUEUES) and are mapped 1:1
* to doorbells within the process's doorbell page
*/
unsigned int kfd_queue_id_to_doorbell(struct kfd_dev *kfd,
struct kfd_process *process,
unsigned int queue_id)
{
/*
* doorbell_id_offset accounts for doorbells taken by KGD.
* pasid * doorbell_process_allocation/sizeof(u32) adjusts
* to the process's doorbells
*/
return kfd->doorbell_id_offset +
process->pasid * (doorbell_process_allocation()/sizeof(u32)) +
queue_id;
}
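/*
 * Worked example (illustrative, assuming a 4 KiB doorbell page, i.e. 1024
 * u32 slots per process): a process with pasid 3 asking for queue_id 2 gets
 * doorbell index doorbell_id_offset + 3 * 1024 + 2 within the device's
 * doorbell aperture.
 */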
uint64_t kfd_get_number_elems(struct kfd_dev *kfd)
{
uint64_t num_of_elems = (kfd->shared_resources.doorbell_aperture_size -
kfd->shared_resources.doorbell_start_offset) /
doorbell_process_allocation() + 1;
return num_of_elems;
}
phys_addr_t kfd_get_process_doorbells(struct kfd_dev *dev,
struct kfd_process *process)
{
return dev->doorbell_base +
process->pasid * doorbell_process_allocation();
}


@@ -0,0 +1,356 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/device.h>
#include <linux/export.h>
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/compat.h>
#include <uapi/linux/kfd_ioctl.h>
#include <linux/time.h>
#include "kfd_priv.h"
#include <linux/mm.h>
#include <uapi/asm-generic/mman-common.h>
#include <asm/processor.h>
/*
* The primary memory I/O features being added for revisions of gfxip
* beyond 7.0 (Kaveri) are:
*
* Access to ATC/IOMMU mapped memory w/ associated extension of VA to 48b
*
* Flat shader memory access - these are new shader vector memory
* operations that do not reference a T#/V# so a pointer is what is
* sourced from the vector gprs for direct access to memory.
* This pointer space has the Shared(LDS) and Private(Scratch) memory
* mapped into this pointer space as apertures.
* The hardware then determines how to direct the memory request
* based on what apertures the request falls in.
*
* Unaligned support and alignment check
*
*
* System Unified Address - SUA
*
* The standard usage for GPU virtual addresses are that they are mapped by
* a set of page tables we call GPUVM and these page tables are managed by
* a combination of vidMM/driver software components. The current virtual
* address (VA) range for GPUVM is 40b.
*
* As of gfxip7.1 and beyond we are adding the ability for compute memory
* clients (CP/RLC, DMA, SHADER(ifetch, scalar, and vector ops)) to access
* the same page tables used by host x86 processors and that are managed by
* the operating system. This is via a technique and hardware called ATC/IOMMU.
* The GPU has the capability of accessing both the GPUVM and ATC address
* spaces for a given VMID (process) simultaneously and we call this feature
* system unified address (SUA).
*
* There are three fundamental address modes of operation for a given VMID
* (process) on the GPU:
*
* HSA64 64b pointers and the default address space is ATC
* HSA32 32b pointers and the default address space is ATC
* GPUVM 64b pointers and the default address space is GPUVM (driver
* model mode)
*
*
* HSA64 - ATC/IOMMU 64b
*
* A 64b pointer in the AMD64/IA64 CPU architecture is not fully utilized
* by the CPU so an AMD CPU can only access the high area
* (VA[63:47] == 0x1FFFF) and low area (VA[63:47] == 0) of the address space
* so the actual VA carried to translation is 48b. There is a hole in
* the middle of the 64b VA space.
*
* The GPU not only has access to all of the CPU accessible address space via
* ATC/IOMMU, but it also has access to the GPUVM address space. The system
* unified address feature (SUA) is the mapping of GPUVM and ATC address
* spaces into a unified pointer space. The method we take for 64b mode is
* to map the full 40b GPUVM address space into the hole of the 64b address
* space.
* The GPUVM_Base/GPUVM_Limit defines the aperture in the 64b space where we
* direct requests to be translated via GPUVM page tables instead of the
* IOMMU path.
*
*
* 64b to 49b Address conversion
*
* Note that there are still significant portions of unused regions (holes)
* in the 64b address space even for the GPU. There are several places in
* the pipeline (sw and hw), we wish to compress the 64b virtual address
* to a 49b address. This 49b address is constituted of an ATC bit
* plus a 48b virtual address. This 49b address is what is passed to the
* translation hardware. ATC==0 means the 48b address is a GPUVM address
* (max of 2^40 - 1) intended to be translated via GPUVM page tables.
* ATC==1 means the 48b address is intended to be translated via IOMMU
* page tables.
*
* A 64b pointer is compared to the apertures that are defined (Base/Limit), in
* this case the GPUVM aperture is defined, and if a pointer falls in this
* aperture, we subtract the GPUVM_Base address and set the ATC bit to zero
* as part of the 64b to 49b conversion.
*
* Where this 64b to 49b conversion is done is a function of the usage.
* Most GPU memory access is via memory objects where the driver builds
* a descriptor which consists of a base address and a memory access by
* the GPU usually consists of some kind of an offset or Cartesian coordinate
* that references this memory descriptor. This is the case for shader
* instructions that reference the T# or V# constants, or for specified
* locations of assets (ex. the shader program location). In these cases
* the driver is what handles the 64b to 49b conversion and the base
* address in the descriptor (ex. V# or T# or shader program location)
* is defined as a 48b address w/ an ATC bit. For this usage a given
* memory object cannot straddle multiple apertures in the 64b address
* space. For example a shader program cannot jump in/out between ATC
* and GPUVM space.
*
* In some cases we wish to pass a 64b pointer to the GPU hardware and
* the GPU hw does the 64b to 49b conversion before passing memory
* requests to the cache/memory system. This is the case for the
* S_LOAD and FLAT_* shader memory instructions where we have 64b pointers
* in scalar and vector GPRs respectively.
*
* In all cases (no matter where the 64b -> 49b conversion is done), the gfxip
* hardware sends a 48b address along w/ an ATC bit, to the memory controller
* on the memory request interfaces.
*
* <client>_MC_rdreq_atc // read request ATC bit
*
* 0 : <client>_MC_rdreq_addr is a GPUVM VA
*
* 1 : <client>_MC_rdreq_addr is a ATC VA
*
*
* Spare aperture (APE1)
*
* We use the GPUVM aperture to differentiate ATC vs. GPUVM, but we also use
* apertures to set the Mtype field for S_LOAD/FLAT_* ops which is input to the
* config tables for setting cache policies. The spare (APE1) aperture is
* motivated by getting a different Mtype from the default.
* The default aperture isn't an actual base/limit aperture; it is just the
* address space that doesn't hit any defined base/limit apertures.
* Together the GPUVM, APE1, and default apertures make up the gfxip7.x SUA
* address map. The APE1 aperture can be placed either below or above
* the hole (it cannot be in the hole).
*
*
* General Aperture definitions and rules
*
* An aperture register definition consists of a Base, Limit, Mtype, and
* usually an ATC bit indicating which translation tables that aperture uses.
* In all cases (for SUA and DUA apertures discussed later), aperture base
* and limit definitions are 64KB aligned.
*
* <ape>_Base[63:0] = { <ape>_Base_register[63:16], 0x0000 }
*
* <ape>_Limit[63:0] = { <ape>_Limit_register[63:16], 0xFFFF }
*
* The base and limit are considered inclusive to an aperture so being
* inside an aperture means (address >= Base) AND (address <= Limit).
*
* In no case is a payload that straddles multiple apertures expected to work.
* For example a load_dword_x4 that starts in one aperture and ends in another,
* does not work. For the vector FLAT_* ops we have detection capability in
* the shader for reporting a memory violation back to the
* SQ block for use in traps.
* A memory violation results when an op falls into the hole,
* or a payload straddles multiple apertures. The S_LOAD instruction
* does not have this detection.
*
* Apertures cannot overlap.
*
*
*
* HSA32 - ATC/IOMMU 32b
*
* For HSA32 mode, the pointers are interpreted as 32 bits and use a single GPR
* instead of two for the S_LOAD and FLAT_* ops. The entire GPUVM space of 40b
* will not fit so there is only partial visibility to the GPUVM
* space (defined by the aperture) for S_LOAD and FLAT_* ops.
* There is no spare (APE1) aperture for HSA32 mode.
*
*
* GPUVM 64b mode (driver model)
*
* This mode is related to HSA64; the difference is that
* the default aperture is GPUVM (ATC==0) rather than ATC space.
* We have gfxip7.x hardware that has FLAT_* and S_LOAD support for
* SUA GPUVM mode, but does not support HSA32/HSA64.
*
*
* Device Unified Address - DUA
*
* Device unified address (DUA) is the name of the feature that maps the
* Shared(LDS) memory and Private(Scratch) memory into the overall address
* space for use by the new FLAT_* vector memory ops. The Shared and
* Private memories are mapped as apertures into the address space,
* and the hardware detects when a FLAT_* memory request is to be redirected
* to the LDS or Scratch memory when it falls into one of these apertures.
* Like the SUA apertures, the Shared/Private apertures are 64KB aligned and
* the base/limit is in the aperture. For both HSA64 and GPUVM SUA modes,
* the Shared/Private apertures are always placed in a limited selection of
* options in the hole of the 64b address space. For HSA32 mode, the
* Shared/Private apertures can be placed anywhere in the 32b space
* except at 0.
*
*
* HSA64 Apertures for FLAT_* vector ops
*
* For HSA64 SUA mode, the Shared and Private apertures are always placed
* in the hole w/ a limited selection of possible locations. The requests
* that fall in the private aperture are expanded as a function of the
* work-item id (tid) and redirected to the location of the
* hidden private memory. The hidden private can be placed in either GPUVM
* or ATC space. The addresses that fall in the shared aperture are
* re-directed to the on-chip LDS memory hardware.
*
*
* HSA32 Apertures for FLAT_* vector ops
*
* In HSA32 mode, the Private and Shared apertures can be placed anywhere
* in the 32b space except at 0 (Private or Shared Base at zero disables
* the apertures). If the base addresses of the apertures are non-zero
* (i.e. the apertures exist), the size is always 64KB.
*
*
* GPUVM Apertures for FLAT_* vector ops
*
* In GPUVM mode, the Shared/Private apertures are specified identically
* to HSA64 mode where they are always in the hole at a limited selection
* of locations.
*
*
* Aperture Definitions for SUA and DUA
*
* The interpretation of the aperture register definitions for a given
* VMID is a function of the SUA Mode which is one of HSA64, HSA32, or
* GPUVM64 discussed in previous sections. The mode is first decoded, and
* then the remaining register decode is a function of the mode.
*
*
* SUA Mode Decode
*
* For the S_LOAD and FLAT_* shader operations, the SUA mode is decoded from
* the COMPUTE_DISPATCH_INITIATOR:DATA_ATC bit and
* the SH_MEM_CONFIG:PTR32 bits.
*
* COMPUTE_DISPATCH_INITIATOR:DATA_ATC SH_MEM_CONFIG:PTR32 Mode
*
* 1 0 HSA64
*
* 1 1 HSA32
*
* 0 X GPUVM64
*
* In general the hardware will ignore the PTR32 bit and treat
* it as 0 whenever DATA_ATC = 0, but sw should set PTR32=0
* when DATA_ATC=0.
*
* The DATA_ATC bit is only set for compute dispatches.
* All Draw dispatches are hardcoded to GPUVM64 mode
* for FLAT_* / S_LOAD operations.
*/
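/*
 * Illustrative sketch (not part of this patch) of the mode decode table
 * above; the enum and helper are hypothetical and exist only to make the
 * DATA_ATC/PTR32 combinations explicit.
 */
enum example_sua_mode { EXAMPLE_HSA64, EXAMPLE_HSA32, EXAMPLE_GPUVM64 };

static inline enum example_sua_mode example_decode_sua_mode(bool data_atc,
		bool ptr32)
{
	if (!data_atc)
		return EXAMPLE_GPUVM64;	/* PTR32 is ignored when DATA_ATC=0 */
	return ptr32 ? EXAMPLE_HSA32 : EXAMPLE_HSA64;
}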
#define MAKE_GPUVM_APP_BASE(gpu_num) \
(((uint64_t)(gpu_num) << 61) + 0x1000000000000L)
#define MAKE_GPUVM_APP_LIMIT(base) \
(((uint64_t)(base) & \
0xFFFFFF0000000000UL) | 0xFFFFFFFFFFL)
#define MAKE_SCRATCH_APP_BASE(gpu_num) \
(((uint64_t)(gpu_num) << 61) + 0x100000000L)
#define MAKE_SCRATCH_APP_LIMIT(base) \
(((uint64_t)base & 0xFFFFFFFF00000000UL) | 0xFFFFFFFF)
#define MAKE_LDS_APP_BASE(gpu_num) \
(((uint64_t)(gpu_num) << 61) + 0x0)
#define MAKE_LDS_APP_LIMIT(base) \
(((uint64_t)(base) & 0xFFFFFFFF00000000UL) | 0xFFFFFFFF)
int kfd_init_apertures(struct kfd_process *process)
{
uint8_t id = 0;
struct kfd_dev *dev;
struct kfd_process_device *pdd;
mutex_lock(&process->mutex);
/*Iterating over all devices*/
while ((dev = kfd_topology_enum_kfd_devices(id)) != NULL &&
id < NUM_OF_SUPPORTED_GPUS) {
pdd = kfd_get_process_device_data(dev, process, 1);
/*
* For a 64-bit process the apertures are statically reserved in
* the x86_64 non-canonical process address space.
* amdkfd doesn't currently support apertures for 32-bit processes.
*/
if (process->is_32bit_user_mode) {
pdd->lds_base = pdd->lds_limit = 0;
pdd->gpuvm_base = pdd->gpuvm_limit = 0;
pdd->scratch_base = pdd->scratch_limit = 0;
} else {
/*
* node id can't be 0 - the three MSB bits of
* the aperture shouldn't be 0
*/
pdd->lds_base = MAKE_LDS_APP_BASE(id + 1);
pdd->lds_limit = MAKE_LDS_APP_LIMIT(pdd->lds_base);
pdd->gpuvm_base = MAKE_GPUVM_APP_BASE(id + 1);
pdd->gpuvm_limit =
MAKE_GPUVM_APP_LIMIT(pdd->gpuvm_base);
pdd->scratch_base = MAKE_SCRATCH_APP_BASE(id + 1);
pdd->scratch_limit =
MAKE_SCRATCH_APP_LIMIT(pdd->scratch_base);
}
dev_dbg(kfd_device, "node id %u\n", id);
dev_dbg(kfd_device, "gpu id %u\n", pdd->dev->id);
dev_dbg(kfd_device, "lds_base %llX\n", pdd->lds_base);
dev_dbg(kfd_device, "lds_limit %llX\n", pdd->lds_limit);
dev_dbg(kfd_device, "gpuvm_base %llX\n", pdd->gpuvm_base);
dev_dbg(kfd_device, "gpuvm_limit %llX\n", pdd->gpuvm_limit);
dev_dbg(kfd_device, "scratch_base %llX\n", pdd->scratch_base);
dev_dbg(kfd_device, "scratch_limit %llX\n", pdd->scratch_limit);
id++;
}
mutex_unlock(&process->mutex);
return 0;
}


@@ -0,0 +1,176 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*
* KFD Interrupts.
*
* AMD GPUs deliver interrupts by pushing an interrupt description onto the
* interrupt ring and then sending an interrupt. KGD receives the interrupt
* in ISR and sends us a pointer to each new entry on the interrupt ring.
*
* We generally can't process interrupt-signaled events from ISR, so we call
* out to each interrupt client module (currently only the scheduler) to ask if
* each interrupt is interesting. If they return true, then it requires further
* processing so we copy it to an internal interrupt ring and call each
* interrupt client again from a work-queue.
*
* There's no acknowledgment for the interrupts we use. The hardware simply
* queues a new interrupt each time without waiting.
*
* The fixed-size internal queue means that it's possible for us to lose
* interrupts because we have no back-pressure to the hardware.
*/
#include <linux/slab.h>
#include <linux/device.h>
#include "kfd_priv.h"
#define KFD_INTERRUPT_RING_SIZE 256
static void interrupt_wq(struct work_struct *);
int kfd_interrupt_init(struct kfd_dev *kfd)
{
void *interrupt_ring = kmalloc_array(KFD_INTERRUPT_RING_SIZE,
kfd->device_info->ih_ring_entry_size,
GFP_KERNEL);
if (!interrupt_ring)
return -ENOMEM;
kfd->interrupt_ring = interrupt_ring;
kfd->interrupt_ring_size =
KFD_INTERRUPT_RING_SIZE * kfd->device_info->ih_ring_entry_size;
atomic_set(&kfd->interrupt_ring_wptr, 0);
atomic_set(&kfd->interrupt_ring_rptr, 0);
spin_lock_init(&kfd->interrupt_lock);
INIT_WORK(&kfd->interrupt_work, interrupt_wq);
kfd->interrupts_active = true;
/*
* After this function returns, the interrupt will be enabled. This
* barrier ensures that the interrupt running on a different processor
* sees all the above writes.
*/
smp_wmb();
return 0;
}
void kfd_interrupt_exit(struct kfd_dev *kfd)
{
/*
* Stop the interrupt handler from writing to the ring and scheduling
* workqueue items. The spinlock ensures that any interrupt running
* after we have unlocked sees interrupts_active = false.
*/
unsigned long flags;
spin_lock_irqsave(&kfd->interrupt_lock, flags);
kfd->interrupts_active = false;
spin_unlock_irqrestore(&kfd->interrupt_lock, flags);
/*
* flush_scheduled_work() ensures that there are no outstanding
* work-queue items that will access interrupt_ring. New work items
* can't be created because we stopped interrupt handling above.
*/
flush_scheduled_work();
kfree(kfd->interrupt_ring);
}
/*
* This assumes that it can't be called concurrently with itself
* but only with dequeue_ih_ring_entry.
*/
bool enqueue_ih_ring_entry(struct kfd_dev *kfd, const void *ih_ring_entry)
{
unsigned int rptr = atomic_read(&kfd->interrupt_ring_rptr);
unsigned int wptr = atomic_read(&kfd->interrupt_ring_wptr);
if ((rptr - wptr) % kfd->interrupt_ring_size ==
kfd->device_info->ih_ring_entry_size) {
/* This is very bad, the system is likely to hang. */
dev_err_ratelimited(kfd_chardev(),
"Interrupt ring overflow, dropping interrupt.\n");
return false;
}
memcpy(kfd->interrupt_ring + wptr, ih_ring_entry,
kfd->device_info->ih_ring_entry_size);
wptr = (wptr + kfd->device_info->ih_ring_entry_size) %
kfd->interrupt_ring_size;
smp_wmb(); /* Ensure memcpy'd data is visible before wptr update. */
atomic_set(&kfd->interrupt_ring_wptr, wptr);
return true;
}
/*
* This assumes that it can't be called concurrently with itself
* but only with enqueue_ih_ring_entry.
*/
static bool dequeue_ih_ring_entry(struct kfd_dev *kfd, void *ih_ring_entry)
{
/*
* Assume that wait queues have an implicit barrier, i.e. anything that
* happened in the ISR before it queued work is visible.
*/
unsigned int wptr = atomic_read(&kfd->interrupt_ring_wptr);
unsigned int rptr = atomic_read(&kfd->interrupt_ring_rptr);
if (rptr == wptr)
return false;
memcpy(ih_ring_entry, kfd->interrupt_ring + rptr,
kfd->device_info->ih_ring_entry_size);
rptr = (rptr + kfd->device_info->ih_ring_entry_size) %
kfd->interrupt_ring_size;
/*
* Ensure the rptr write update is not visible until
* memcpy has finished reading.
*/
smp_mb();
atomic_set(&kfd->interrupt_ring_rptr, rptr);
return true;
}
static void interrupt_wq(struct work_struct *work)
{
struct kfd_dev *dev = container_of(work, struct kfd_dev,
interrupt_work);
uint32_t ih_ring_entry[DIV_ROUND_UP(
dev->device_info->ih_ring_entry_size,
sizeof(uint32_t))];
while (dequeue_ih_ring_entry(dev, ih_ring_entry))
;
}
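The enqueue/dequeue index arithmetic above deliberately leaves one entry's worth of space unused so that rptr == wptr can always mean "empty". A standalone userspace model of just that arithmetic (the entry and ring sizes below are made up; the real values come from device_info):

/*
 * ih_ring_model.c - userspace model of the index arithmetic used by
 * enqueue_ih_ring_entry()/dequeue_ih_ring_entry(); not driver code.
 */
#include <stdbool.h>
#include <stdio.h>

#define ENTRY_SIZE 16u
#define RING_SIZE  (16u * ENTRY_SIZE)

static unsigned int rptr, wptr;

static bool ring_full(void)
{
	/* one slot is kept unused so rptr == wptr always means "empty" */
	return (rptr - wptr) % RING_SIZE == ENTRY_SIZE;
}

static bool enqueue(void)
{
	if (ring_full())
		return false;	/* the driver drops the interrupt here */
	wptr = (wptr + ENTRY_SIZE) % RING_SIZE;
	return true;
}

static bool dequeue(void)
{
	if (rptr == wptr)
		return false;	/* empty */
	rptr = (rptr + ENTRY_SIZE) % RING_SIZE;
	return true;
}

int main(void)
{
	unsigned int produced = 0;

	while (enqueue())
		produced++;
	/* 15 of the 16 slots are usable before the overflow check trips */
	printf("enqueued %u entries before the ring reported full\n", produced);
	while (dequeue())
		;
	return 0;
}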


@ -0,0 +1,353 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include "kfd_kernel_queue.h"
#include "kfd_priv.h"
#include "kfd_device_queue_manager.h"
#include "kfd_pm4_headers.h"
#include "kfd_pm4_opcodes.h"
#define PM4_COUNT_ZERO (((1 << 15) - 1) << 16)
static bool initialize(struct kernel_queue *kq, struct kfd_dev *dev,
enum kfd_queue_type type, unsigned int queue_size)
{
struct queue_properties prop;
int retval;
union PM4_MES_TYPE_3_HEADER nop;
BUG_ON(!kq || !dev);
BUG_ON(type != KFD_QUEUE_TYPE_DIQ && type != KFD_QUEUE_TYPE_HIQ);
pr_debug("kfd: In func %s initializing queue type %d size %d\n",
__func__, KFD_QUEUE_TYPE_HIQ, queue_size);
nop.opcode = IT_NOP;
nop.type = PM4_TYPE_3;
nop.u32all |= PM4_COUNT_ZERO;
kq->dev = dev;
kq->nop_packet = nop.u32all;
switch (type) {
case KFD_QUEUE_TYPE_DIQ:
case KFD_QUEUE_TYPE_HIQ:
kq->mqd = dev->dqm->get_mqd_manager(dev->dqm,
KFD_MQD_TYPE_CIK_HIQ);
break;
default:
BUG();
break;
}
if (kq->mqd == NULL)
return false;
prop.doorbell_ptr = kfd_get_kernel_doorbell(dev, &prop.doorbell_off);
if (prop.doorbell_ptr == NULL)
goto err_get_kernel_doorbell;
retval = kfd2kgd->allocate_mem(dev->kgd,
queue_size,
PAGE_SIZE,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) &kq->pq);
if (retval != 0)
goto err_pq_allocate_vidmem;
kq->pq_kernel_addr = kq->pq->cpu_ptr;
kq->pq_gpu_addr = kq->pq->gpu_addr;
retval = kfd2kgd->allocate_mem(dev->kgd,
sizeof(*kq->rptr_kernel),
32,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) &kq->rptr_mem);
if (retval != 0)
goto err_rptr_allocate_vidmem;
kq->rptr_kernel = kq->rptr_mem->cpu_ptr;
kq->rptr_gpu_addr = kq->rptr_mem->gpu_addr;
retval = kfd2kgd->allocate_mem(dev->kgd,
sizeof(*kq->wptr_kernel),
32,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) &kq->wptr_mem);
if (retval != 0)
goto err_wptr_allocate_vidmem;
kq->wptr_kernel = kq->wptr_mem->cpu_ptr;
kq->wptr_gpu_addr = kq->wptr_mem->gpu_addr;
memset(kq->pq_kernel_addr, 0, queue_size);
memset(kq->rptr_kernel, 0, sizeof(*kq->rptr_kernel));
memset(kq->wptr_kernel, 0, sizeof(*kq->wptr_kernel));
prop.queue_size = queue_size;
prop.is_interop = false;
prop.priority = 1;
prop.queue_percent = 100;
prop.type = type;
prop.vmid = 0;
prop.queue_address = kq->pq_gpu_addr;
prop.read_ptr = (uint32_t *) kq->rptr_gpu_addr;
prop.write_ptr = (uint32_t *) kq->wptr_gpu_addr;
if (init_queue(&kq->queue, prop) != 0)
goto err_init_queue;
kq->queue->device = dev;
kq->queue->process = kfd_get_process(current);
retval = kq->mqd->init_mqd(kq->mqd, &kq->queue->mqd,
&kq->queue->mqd_mem_obj,
&kq->queue->gart_mqd_addr,
&kq->queue->properties);
if (retval != 0)
goto err_init_mqd;
/* assign HIQ to HQD */
if (type == KFD_QUEUE_TYPE_HIQ) {
pr_debug("assigning hiq to hqd\n");
kq->queue->pipe = KFD_CIK_HIQ_PIPE;
kq->queue->queue = KFD_CIK_HIQ_QUEUE;
kq->mqd->load_mqd(kq->mqd, kq->queue->mqd, kq->queue->pipe,
kq->queue->queue, NULL);
} else {
/* allocate fence for DIQ */
retval = kfd2kgd->allocate_mem(dev->kgd,
sizeof(uint32_t),
32,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) &kq->fence_mem_obj);
if (retval != 0)
goto err_alloc_fence;
kq->fence_kernel_address = kq->fence_mem_obj->cpu_ptr;
kq->fence_gpu_addr = kq->fence_mem_obj->gpu_addr;
}
print_queue(kq->queue);
return true;
err_alloc_fence:
err_init_mqd:
uninit_queue(kq->queue);
err_init_queue:
kfd2kgd->free_mem(dev->kgd, (struct kgd_mem *) kq->wptr_mem);
err_wptr_allocate_vidmem:
kfd2kgd->free_mem(dev->kgd, (struct kgd_mem *) kq->rptr_mem);
err_rptr_allocate_vidmem:
kfd2kgd->free_mem(dev->kgd, (struct kgd_mem *) kq->pq);
err_pq_allocate_vidmem:
pr_err("kfd: error init pq\n");
kfd_release_kernel_doorbell(dev, prop.doorbell_ptr);
err_get_kernel_doorbell:
pr_err("kfd: error init doorbell");
return false;
}
static void uninitialize(struct kernel_queue *kq)
{
BUG_ON(!kq);
if (kq->queue->properties.type == KFD_QUEUE_TYPE_HIQ)
kq->mqd->destroy_mqd(kq->mqd,
NULL,
false,
QUEUE_PREEMPT_DEFAULT_TIMEOUT_MS,
kq->queue->pipe,
kq->queue->queue);
kfd2kgd->free_mem(kq->dev->kgd, (struct kgd_mem *) kq->rptr_mem);
kfd2kgd->free_mem(kq->dev->kgd, (struct kgd_mem *) kq->wptr_mem);
kfd2kgd->free_mem(kq->dev->kgd, (struct kgd_mem *) kq->pq);
kfd_release_kernel_doorbell(kq->dev,
kq->queue->properties.doorbell_ptr);
uninit_queue(kq->queue);
}
static int acquire_packet_buffer(struct kernel_queue *kq,
size_t packet_size_in_dwords, unsigned int **buffer_ptr)
{
size_t available_size;
size_t queue_size_dwords;
uint32_t wptr, rptr;
unsigned int *queue_address;
BUG_ON(!kq || !buffer_ptr);
rptr = *kq->rptr_kernel;
wptr = *kq->wptr_kernel;
queue_address = (unsigned int *)kq->pq_kernel_addr;
queue_size_dwords = kq->queue->properties.queue_size / sizeof(uint32_t);
pr_debug("kfd: In func %s\nrptr: %d\nwptr: %d\nqueue_address 0x%p\n",
__func__, rptr, wptr, queue_address);
available_size = (rptr - 1 - wptr + queue_size_dwords) %
queue_size_dwords;
if (packet_size_in_dwords >= queue_size_dwords ||
packet_size_in_dwords >= available_size) {
/*
* make sure calling functions know
* acquire_packet_buffer() failed
*/
*buffer_ptr = NULL;
return -ENOMEM;
}
if (wptr + packet_size_in_dwords >= queue_size_dwords) {
while (wptr > 0) {
queue_address[wptr] = kq->nop_packet;
wptr = (wptr + 1) % queue_size_dwords;
}
}
*buffer_ptr = &queue_address[wptr];
kq->pending_wptr = wptr + packet_size_in_dwords;
return 0;
}
static void submit_packet(struct kernel_queue *kq)
{
#ifdef DEBUG
int i;
#endif
BUG_ON(!kq);
#ifdef DEBUG
for (i = *kq->wptr_kernel; i < kq->pending_wptr; i++) {
pr_debug("0x%2X ", kq->pq_kernel_addr[i]);
if (i % 15 == 0)
pr_debug("\n");
}
pr_debug("\n");
#endif
*kq->wptr_kernel = kq->pending_wptr;
write_kernel_doorbell(kq->queue->properties.doorbell_ptr,
kq->pending_wptr);
}
static int sync_with_hw(struct kernel_queue *kq, unsigned long timeout_ms)
{
unsigned long org_timeout_ms;
BUG_ON(!kq);
org_timeout_ms = timeout_ms;
timeout_ms += jiffies * 1000 / HZ;
while (*kq->wptr_kernel != *kq->rptr_kernel) {
if (time_after(jiffies * 1000 / HZ, timeout_ms)) {
pr_err("kfd: kernel_queue %s timeout expired %lu\n",
__func__, org_timeout_ms);
pr_err("kfd: wptr: %d rptr: %d\n",
*kq->wptr_kernel, *kq->rptr_kernel);
return -ETIME;
}
schedule();
}
return 0;
}
static void rollback_packet(struct kernel_queue *kq)
{
BUG_ON(!kq);
kq->pending_wptr = *kq->queue->properties.write_ptr;
}
struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
enum kfd_queue_type type)
{
struct kernel_queue *kq;
BUG_ON(!dev);
kq = kzalloc(sizeof(struct kernel_queue), GFP_KERNEL);
if (!kq)
return NULL;
kq->initialize = initialize;
kq->uninitialize = uninitialize;
kq->acquire_packet_buffer = acquire_packet_buffer;
kq->submit_packet = submit_packet;
kq->sync_with_hw = sync_with_hw;
kq->rollback_packet = rollback_packet;
if (kq->initialize(kq, dev, type, KFD_KERNEL_QUEUE_SIZE) == false) {
pr_err("kfd: failed to init kernel queue\n");
kfree(kq);
return NULL;
}
return kq;
}
void kernel_queue_uninit(struct kernel_queue *kq)
{
BUG_ON(!kq);
kq->uninitialize(kq);
kfree(kq);
}
static __attribute__((unused)) void test_kq(struct kfd_dev *dev)
{
struct kernel_queue *kq;
uint32_t *buffer, i;
int retval;
BUG_ON(!dev);
pr_debug("kfd: starting kernel queue test\n");
kq = kernel_queue_init(dev, KFD_QUEUE_TYPE_HIQ);
BUG_ON(!kq);
retval = kq->acquire_packet_buffer(kq, 5, &buffer);
BUG_ON(retval != 0);
for (i = 0; i < 5; i++)
buffer[i] = kq->nop_packet;
kq->submit_packet(kq);
kq->sync_with_hw(kq, 1000);
pr_debug("kfd: ending kernel queue test\n");
}
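acquire_packet_buffer() never lets a packet straddle the end of the ring: when a request would cross the end, the remaining tail dwords are filled with NOP packets and the packet is placed at index 0 instead. A standalone sketch of that wrap handling (queue and packet sizes are illustrative only):

/*
 * pq_wrap_model.c - standalone model of the NOP padding done in
 * acquire_packet_buffer(); not driver code.
 */
#include <stdio.h>

#define QUEUE_DWORDS 64u
#define NOP_PACKET   0xFFFF1000u	/* stand-in for kq->nop_packet */

static unsigned int queue[QUEUE_DWORDS];

/* returns the index the packet is written at, mirroring the driver logic */
static unsigned int acquire(unsigned int wptr, unsigned int size_in_dwords)
{
	if (wptr + size_in_dwords >= QUEUE_DWORDS) {
		/* pad the tail with NOPs and start the packet at index 0 */
		while (wptr > 0) {
			queue[wptr] = NOP_PACKET;
			wptr = (wptr + 1) % QUEUE_DWORDS;
		}
	}
	return wptr;
}

int main(void)
{
	/* a 5-dword packet requested at index 62 lands at index 0 instead */
	printf("packet placed at index %u\n", acquire(62, 5));
	return 0;
}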


@ -0,0 +1,69 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef KFD_KERNEL_QUEUE_H_
#define KFD_KERNEL_QUEUE_H_
#include <linux/list.h>
#include <linux/types.h>
#include "kfd_priv.h"
struct kernel_queue {
/* interface */
bool (*initialize)(struct kernel_queue *kq, struct kfd_dev *dev,
enum kfd_queue_type type, unsigned int queue_size);
void (*uninitialize)(struct kernel_queue *kq);
int (*acquire_packet_buffer)(struct kernel_queue *kq,
size_t packet_size_in_dwords,
unsigned int **buffer_ptr);
void (*submit_packet)(struct kernel_queue *kq);
int (*sync_with_hw)(struct kernel_queue *kq,
unsigned long timeout_ms);
void (*rollback_packet)(struct kernel_queue *kq);
/* data */
struct kfd_dev *dev;
struct mqd_manager *mqd;
struct queue *queue;
uint32_t pending_wptr;
unsigned int nop_packet;
struct kfd_mem_obj *rptr_mem;
uint32_t *rptr_kernel;
uint64_t rptr_gpu_addr;
struct kfd_mem_obj *wptr_mem;
uint32_t *wptr_kernel;
uint64_t wptr_gpu_addr;
struct kfd_mem_obj *pq;
uint64_t pq_gpu_addr;
uint32_t *pq_kernel_addr;
struct kfd_mem_obj *fence_mem_obj;
uint64_t fence_gpu_addr;
void *fence_kernel_address;
struct list_head list;
};
#endif /* KFD_KERNEL_QUEUE_H_ */


@ -0,0 +1,159 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/module.h>
#include <linux/sched.h>
#include <linux/moduleparam.h>
#include <linux/device.h>
#include "kfd_priv.h"
#define KFD_DRIVER_AUTHOR "AMD Inc. and others"
#define KFD_DRIVER_DESC "Standalone HSA driver for AMD's GPUs"
#define KFD_DRIVER_DATE "20141113"
#define KFD_DRIVER_MAJOR 0
#define KFD_DRIVER_MINOR 7
#define KFD_DRIVER_PATCHLEVEL 0
const struct kfd2kgd_calls *kfd2kgd;
static const struct kgd2kfd_calls kgd2kfd = {
.exit = kgd2kfd_exit,
.probe = kgd2kfd_probe,
.device_init = kgd2kfd_device_init,
.device_exit = kgd2kfd_device_exit,
.interrupt = kgd2kfd_interrupt,
.suspend = kgd2kfd_suspend,
.resume = kgd2kfd_resume,
};
int sched_policy = KFD_SCHED_POLICY_HWS;
module_param(sched_policy, int, 0444);
MODULE_PARM_DESC(sched_policy,
"Kernel cmdline parameter that defines the amdkfd scheduling policy");
int max_num_of_processes = KFD_MAX_NUM_OF_PROCESSES_DEFAULT;
module_param(max_num_of_processes, int, 0444);
MODULE_PARM_DESC(max_num_of_processes,
"Kernel cmdline parameter that defines the amdkfd maximum number of supported processes");
int max_num_of_queues_per_process = KFD_MAX_NUM_OF_QUEUES_PER_PROCESS_DEFAULT;
module_param(max_num_of_queues_per_process, int, 0444);
MODULE_PARM_DESC(max_num_of_queues_per_process,
"Kernel cmdline parameter that defines the amdkfd maximum number of supported queues per process");
bool kgd2kfd_init(unsigned interface_version,
const struct kfd2kgd_calls *f2g,
const struct kgd2kfd_calls **g2f)
{
/*
* Only one interface version is supported,
* no kfd/kgd version skew allowed.
*/
if (interface_version != KFD_INTERFACE_VERSION)
return false;
/* Protection against multiple amd kgd loads */
if (kfd2kgd)
return true;
kfd2kgd = f2g;
*g2f = &kgd2kfd;
return true;
}
EXPORT_SYMBOL(kgd2kfd_init);
void kgd2kfd_exit(void)
{
}
static int __init kfd_module_init(void)
{
int err;
kfd2kgd = NULL;
/* Verify module parameters */
if ((sched_policy < KFD_SCHED_POLICY_HWS) ||
(sched_policy > KFD_SCHED_POLICY_NO_HWS)) {
pr_err("kfd: sched_policy has invalid value\n");
return -1;
}
/* Verify module parameters */
if ((max_num_of_processes < 0) ||
(max_num_of_processes > KFD_MAX_NUM_OF_PROCESSES)) {
pr_err("kfd: max_num_of_processes must be between 0 to KFD_MAX_NUM_OF_PROCESSES\n");
return -1;
}
if ((max_num_of_queues_per_process < 0) ||
(max_num_of_queues_per_process >
KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)) {
pr_err("kfd: max_num_of_queues_per_process must be between 0 to KFD_MAX_NUM_OF_QUEUES_PER_PROCESS\n");
return -1;
}
err = kfd_pasid_init();
if (err < 0)
goto err_pasid;
err = kfd_chardev_init();
if (err < 0)
goto err_ioctl;
err = kfd_topology_init();
if (err < 0)
goto err_topology;
kfd_process_create_wq();
dev_info(kfd_device, "Initialized module\n");
return 0;
err_topology:
kfd_chardev_exit();
err_ioctl:
kfd_pasid_exit();
err_pasid:
return err;
}
static void __exit kfd_module_exit(void)
{
kfd_process_destroy_wq();
kfd_topology_shutdown();
kfd_chardev_exit();
kfd_pasid_exit();
dev_info(kfd_device, "Removed module\n");
}
module_init(kfd_module_init);
module_exit(kfd_module_exit);
MODULE_AUTHOR(KFD_DRIVER_AUTHOR);
MODULE_DESCRIPTION(KFD_DRIVER_DESC);
MODULE_LICENSE("GPL and additional rights");
MODULE_VERSION(__stringify(KFD_DRIVER_MAJOR) "."
__stringify(KFD_DRIVER_MINOR) "."
__stringify(KFD_DRIVER_PATCHLEVEL));


@ -0,0 +1,346 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/printk.h>
#include <linux/slab.h>
#include "kfd_priv.h"
#include "kfd_mqd_manager.h"
#include "cik_regs.h"
#include "../../radeon/cik_reg.h"
inline void busy_wait(unsigned long ms)
{
while (time_before(jiffies, ms))
cpu_relax();
}
static inline struct cik_mqd *get_mqd(void *mqd)
{
return (struct cik_mqd *)mqd;
}
static int init_mqd(struct mqd_manager *mm, void **mqd,
struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
struct queue_properties *q)
{
uint64_t addr;
struct cik_mqd *m;
int retval;
BUG_ON(!mm || !q || !mqd);
pr_debug("kfd: In func %s\n", __func__);
retval = kfd2kgd->allocate_mem(mm->dev->kgd,
sizeof(struct cik_mqd),
256,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) mqd_mem_obj);
if (retval != 0)
return -ENOMEM;
m = (struct cik_mqd *) (*mqd_mem_obj)->cpu_ptr;
addr = (*mqd_mem_obj)->gpu_addr;
memset(m, 0, ALIGN(sizeof(struct cik_mqd), 256));
m->header = 0xC0310800;
m->compute_pipelinestat_enable = 1;
m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;
/*
* Make sure to use the last queue state saved on mqd when the cp
* reassigns the queue, so when queue is switched on/off (e.g over
* subscription or quantum timeout) the context will be consistent
*/
m->cp_hqd_persistent_state =
DEFAULT_CP_HQD_PERSISTENT_STATE | PRELOAD_REQ;
m->cp_mqd_control = MQD_CONTROL_PRIV_STATE_EN;
m->cp_mqd_base_addr_lo = lower_32_bits(addr);
m->cp_mqd_base_addr_hi = upper_32_bits(addr);
m->cp_hqd_ib_control = DEFAULT_MIN_IB_AVAIL_SIZE | IB_ATC_EN;
/* Although WinKFD writes this, I suspect it should not be necessary */
m->cp_hqd_ib_control = IB_ATC_EN | DEFAULT_MIN_IB_AVAIL_SIZE;
m->cp_hqd_quantum = QUANTUM_EN | QUANTUM_SCALE_1MS |
QUANTUM_DURATION(10);
/*
* Pipe Priority
* Identifies the pipe relative priority when this queue is connected
* to the pipeline. The pipe priority is against the GFX pipe and HP3D.
* In KFD we are using a fixed pipe priority set to CS_MEDIUM.
* 0 = CS_LOW (typically below GFX)
* 1 = CS_MEDIUM (typically between HP3D and GFX)
* 2 = CS_HIGH (typically above HP3D)
*/
m->cp_hqd_pipe_priority = 1;
m->cp_hqd_queue_priority = 15;
*mqd = m;
if (gart_addr != NULL)
*gart_addr = addr;
retval = mm->update_mqd(mm, m, q);
return retval;
}
static void uninit_mqd(struct mqd_manager *mm, void *mqd,
struct kfd_mem_obj *mqd_mem_obj)
{
BUG_ON(!mm || !mqd);
kfd2kgd->free_mem(mm->dev->kgd, (struct kgd_mem *) mqd_mem_obj);
}
static int load_mqd(struct mqd_manager *mm, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr)
{
return kfd2kgd->hqd_load(mm->dev->kgd, mqd, pipe_id, queue_id, wptr);
}
static int update_mqd(struct mqd_manager *mm, void *mqd,
struct queue_properties *q)
{
struct cik_mqd *m;
BUG_ON(!mm || !q || !mqd);
pr_debug("kfd: In func %s\n", __func__);
m = get_mqd(mqd);
m->cp_hqd_pq_control = DEFAULT_RPTR_BLOCK_SIZE |
DEFAULT_MIN_AVAIL_SIZE | PQ_ATC_EN;
/*
* The queue size field holds log2 of the queue size in dwords, minus 1:
* ffs() of the power-of-two size returns log2 + 1, so subtract 1 for that
* and 1 more for the minus-1 encoding.
*/
m->cp_hqd_pq_control |= ffs(q->queue_size / sizeof(unsigned int))
- 1 - 1;
m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
m->cp_hqd_pq_base_hi = upper_32_bits((uint64_t)q->queue_address >> 8);
m->cp_hqd_pq_rptr_report_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
m->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
m->cp_hqd_pq_doorbell_control = DOORBELL_EN |
DOORBELL_OFFSET(q->doorbell_off);
m->cp_hqd_vmid = q->vmid;
if (q->format == KFD_QUEUE_FORMAT_AQL) {
m->cp_hqd_iq_rptr = AQL_ENABLE;
m->cp_hqd_pq_control |= NO_UPDATE_RPTR;
}
m->cp_hqd_active = 0;
q->is_active = false;
if (q->queue_size > 0 &&
q->queue_address != 0 &&
q->queue_percent > 0) {
m->cp_hqd_active = 1;
q->is_active = true;
}
return 0;
}
static int destroy_mqd(struct mqd_manager *mm, void *mqd,
enum kfd_preempt_type type,
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id)
{
return kfd2kgd->hqd_destroy(mm->dev->kgd, type, timeout,
pipe_id, queue_id);
}
static bool is_occupied(struct mqd_manager *mm, void *mqd,
uint64_t queue_address, uint32_t pipe_id,
uint32_t queue_id)
{
return kfd2kgd->hqd_is_occupies(mm->dev->kgd, queue_address,
pipe_id, queue_id);
}
/*
* HIQ MQD implementation.
* The HIQ queue in Kaveri uses the same MQD structure as all the user mode
* queues, but with different initial values.
*/
static int init_mqd_hiq(struct mqd_manager *mm, void **mqd,
struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
struct queue_properties *q)
{
uint64_t addr;
struct cik_mqd *m;
int retval;
BUG_ON(!mm || !q || !mqd || !mqd_mem_obj);
pr_debug("kfd: In func %s\n", __func__);
retval = kfd2kgd->allocate_mem(mm->dev->kgd,
sizeof(struct cik_mqd),
256,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) mqd_mem_obj);
if (retval != 0)
return -ENOMEM;
m = (struct cik_mqd *) (*mqd_mem_obj)->cpu_ptr;
addr = (*mqd_mem_obj)->gpu_addr;
memset(m, 0, ALIGN(sizeof(struct cik_mqd), 256));
m->header = 0xC0310800;
m->compute_pipelinestat_enable = 1;
m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;
m->cp_hqd_persistent_state = DEFAULT_CP_HQD_PERSISTENT_STATE |
PRELOAD_REQ;
m->cp_hqd_quantum = QUANTUM_EN | QUANTUM_SCALE_1MS |
QUANTUM_DURATION(10);
m->cp_mqd_control = MQD_CONTROL_PRIV_STATE_EN;
m->cp_mqd_base_addr_lo = lower_32_bits(addr);
m->cp_mqd_base_addr_hi = upper_32_bits(addr);
m->cp_hqd_ib_control = DEFAULT_MIN_IB_AVAIL_SIZE;
/*
* Pipe Priority
* Identifies the pipe relative priority when this queue is connected
* to the pipeline. The pipe priority is against the GFX pipe and HP3D.
* In KFD we are using a fixed pipe priority set to CS_MEDIUM.
* 0 = CS_LOW (typically below GFX)
* 1 = CS_MEDIUM (typically between HP3D and GFX)
* 2 = CS_HIGH (typically above HP3D)
*/
m->cp_hqd_pipe_priority = 1;
m->cp_hqd_queue_priority = 15;
*mqd = m;
if (gart_addr)
*gart_addr = addr;
retval = mm->update_mqd(mm, m, q);
return retval;
}
static int update_mqd_hiq(struct mqd_manager *mm, void *mqd,
struct queue_properties *q)
{
struct cik_mqd *m;
BUG_ON(!mm || !q || !mqd);
pr_debug("kfd: In func %s\n", __func__);
m = get_mqd(mqd);
m->cp_hqd_pq_control = DEFAULT_RPTR_BLOCK_SIZE |
DEFAULT_MIN_AVAIL_SIZE |
PRIV_STATE |
KMD_QUEUE;
/*
* The queue size field holds log2 of the queue size in dwords, minus 1
* (same encoding as in update_mqd() above).
*/
m->cp_hqd_pq_control |= ffs(q->queue_size / sizeof(unsigned int))
- 1 - 1;
m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
m->cp_hqd_pq_base_hi = upper_32_bits((uint64_t)q->queue_address >> 8);
m->cp_hqd_pq_rptr_report_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
m->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
m->cp_hqd_pq_doorbell_control = DOORBELL_EN |
DOORBELL_OFFSET(q->doorbell_off);
m->cp_hqd_vmid = q->vmid;
m->cp_hqd_active = 0;
q->is_active = false;
if (q->queue_size > 0 &&
q->queue_address != 0 &&
q->queue_percent > 0) {
m->cp_hqd_active = 1;
q->is_active = true;
}
return 0;
}
struct mqd_manager *mqd_manager_init(enum KFD_MQD_TYPE type,
struct kfd_dev *dev)
{
struct mqd_manager *mqd;
BUG_ON(!dev);
BUG_ON(type >= KFD_MQD_TYPE_MAX);
pr_debug("kfd: In func %s\n", __func__);
mqd = kzalloc(sizeof(struct mqd_manager), GFP_KERNEL);
if (!mqd)
return NULL;
mqd->dev = dev;
switch (type) {
case KFD_MQD_TYPE_CIK_CP:
case KFD_MQD_TYPE_CIK_COMPUTE:
mqd->init_mqd = init_mqd;
mqd->uninit_mqd = uninit_mqd;
mqd->load_mqd = load_mqd;
mqd->update_mqd = update_mqd;
mqd->destroy_mqd = destroy_mqd;
mqd->is_occupied = is_occupied;
break;
case KFD_MQD_TYPE_CIK_HIQ:
mqd->init_mqd = init_mqd_hiq;
mqd->uninit_mqd = uninit_mqd;
mqd->load_mqd = load_mqd;
mqd->update_mqd = update_mqd_hiq;
mqd->destroy_mqd = destroy_mqd;
mqd->is_occupied = is_occupied;
break;
default:
kfree(mqd);
return NULL;
}
return mqd;
}
/* SDMA queues should be implemented here when the cp supports them */
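The ffs()-based expression in update_mqd() and update_mqd_hiq() encodes the ring size into CP_HQD_PQ_CONTROL as log2(size in dwords) minus 1. A standalone check of that encoding (the 1 MB queue size is an arbitrary example, not a driver value):

/*
 * hqd_size_encoding.c - standalone check of the queue-size encoding used
 * in update_mqd(); not driver code.
 */
#include <stdio.h>
#include <strings.h>	/* ffs() */

int main(void)
{
	unsigned int queue_size = 1024 * 1024;	/* bytes */
	unsigned int dwords = queue_size / sizeof(unsigned int);
	/*
	 * ffs() of a power of two returns log2 + 1, so subtract 1 for that
	 * and 1 more because the field stores log2(dwords) - 1.
	 */
	int size_field = ffs(dwords) - 1 - 1;

	/* 262144 dwords -> field value 17 */
	printf("queue of %u dwords -> size field %d\n", dwords, size_field);
	return 0;
}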


@ -0,0 +1,91 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef KFD_MQD_MANAGER_H_
#define KFD_MQD_MANAGER_H_
#include "kfd_priv.h"
/**
* struct mqd_manager
*
* @init_mqd: Allocates the mqd buffer on local gpu memory and initializes it.
*
* @load_mqd: Loads the mqd to a concrete hqd slot. Used only when cp
* scheduling is not in use.
*
* @update_mqd: Handles an update call for the MQD.
*
* @destroy_mqd: Destroys the HQD slot and thereby preempts the relevant queue.
* Used only when cp scheduling is not in use.
*
* @uninit_mqd: Releases the mqd buffer from local gpu memory.
*
* @is_occupied: Checks if the relevant HQD slot is occupied.
*
* @mqd_mutex: Mqd manager mutex.
*
* @dev: The kfd device structure coupled with this module.
*
* MQD stands for Memory Queue Descriptor, which represents the current queue
* state in memory and initiates the HQD (Hardware Queue Descriptor) state.
* This structure is effectively a base class for the MQD structures of the
* different ASICs that will be supported in the future.
* The base class also contains all the MQD specific operations.
* Another important point is that each queue has an MQD that keeps its state
* (or context) across each preemption or reassignment.
* Basically there is one instance of the mqd manager class per MQD type per
* ASIC. Currently the kfd driver supports only Kaveri, so there are instances
* per KFD_MQD_TYPE for each device.
*
*/
struct mqd_manager {
int (*init_mqd)(struct mqd_manager *mm, void **mqd,
struct kfd_mem_obj **mqd_mem_obj, uint64_t *gart_addr,
struct queue_properties *q);
int (*load_mqd)(struct mqd_manager *mm, void *mqd,
uint32_t pipe_id, uint32_t queue_id,
uint32_t __user *wptr);
int (*update_mqd)(struct mqd_manager *mm, void *mqd,
struct queue_properties *q);
int (*destroy_mqd)(struct mqd_manager *mm, void *mqd,
enum kfd_preempt_type type,
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id);
void (*uninit_mqd)(struct mqd_manager *mm, void *mqd,
struct kfd_mem_obj *mqd_mem_obj);
bool (*is_occupied)(struct mqd_manager *mm, void *mqd,
uint64_t queue_address, uint32_t pipe_id,
uint32_t queue_id);
struct mutex mqd_mutex;
struct kfd_dev *dev;
};
#endif /* KFD_MQD_MANAGER_H_ */


@ -0,0 +1,565 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/slab.h>
#include <linux/mutex.h>
#include "kfd_device_queue_manager.h"
#include "kfd_kernel_queue.h"
#include "kfd_priv.h"
#include "kfd_pm4_headers.h"
#include "kfd_pm4_opcodes.h"
static inline void inc_wptr(unsigned int *wptr, unsigned int increment_bytes,
unsigned int buffer_size_bytes)
{
unsigned int temp = *wptr + increment_bytes / sizeof(uint32_t);
BUG_ON((temp * sizeof(uint32_t)) > buffer_size_bytes);
*wptr = temp;
}
static unsigned int build_pm4_header(unsigned int opcode, size_t packet_size)
{
union PM4_MES_TYPE_3_HEADER header;
header.u32all = 0;
header.opcode = opcode;
header.count = packet_size/sizeof(uint32_t) - 2;
header.type = PM4_TYPE_3;
return header.u32all;
}
static void pm_calc_rlib_size(struct packet_manager *pm,
unsigned int *rlib_size,
bool *over_subscription)
{
unsigned int process_count, queue_count;
BUG_ON(!pm || !rlib_size || !over_subscription);
process_count = pm->dqm->processes_count;
queue_count = pm->dqm->queue_count;
/* check if there is over-subscription */
*over_subscription = false;
if ((process_count > 1) ||
queue_count > PIPE_PER_ME_CP_SCHEDULING * QUEUES_PER_PIPE) {
*over_subscription = true;
pr_debug("kfd: over subscribed runlist\n");
}
/* calculate run list ib allocation size */
*rlib_size = process_count * sizeof(struct pm4_map_process) +
queue_count * sizeof(struct pm4_map_queues);
/*
* Increase the allocation size in case we need a chained run list
* when over subscription
*/
if (*over_subscription)
*rlib_size += sizeof(struct pm4_runlist);
pr_debug("kfd: runlist ib size %d\n", *rlib_size);
}
static int pm_allocate_runlist_ib(struct packet_manager *pm,
unsigned int **rl_buffer,
uint64_t *rl_gpu_buffer,
unsigned int *rl_buffer_size,
bool *is_over_subscription)
{
int retval;
BUG_ON(!pm);
BUG_ON(pm->allocated == true);
BUG_ON(is_over_subscription == NULL);
pm_calc_rlib_size(pm, rl_buffer_size, is_over_subscription);
retval = kfd2kgd->allocate_mem(pm->dqm->dev->kgd,
*rl_buffer_size,
PAGE_SIZE,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE,
(struct kgd_mem **) &pm->ib_buffer_obj);
if (retval != 0) {
pr_err("kfd: failed to allocate runlist IB\n");
return retval;
}
*(void **)rl_buffer = pm->ib_buffer_obj->cpu_ptr;
*rl_gpu_buffer = pm->ib_buffer_obj->gpu_addr;
memset(*rl_buffer, 0, *rl_buffer_size);
pm->allocated = true;
return retval;
}
static int pm_create_runlist(struct packet_manager *pm, uint32_t *buffer,
uint64_t ib, size_t ib_size_in_dwords, bool chain)
{
struct pm4_runlist *packet;
BUG_ON(!pm || !buffer || !ib);
packet = (struct pm4_runlist *)buffer;
memset(buffer, 0, sizeof(struct pm4_runlist));
packet->header.u32all = build_pm4_header(IT_RUN_LIST,
sizeof(struct pm4_runlist));
packet->bitfields4.ib_size = ib_size_in_dwords;
packet->bitfields4.chain = chain ? 1 : 0;
packet->bitfields4.offload_polling = 0;
packet->bitfields4.valid = 1;
packet->ordinal2 = lower_32_bits(ib);
packet->bitfields3.ib_base_hi = upper_32_bits(ib);
return 0;
}
static int pm_create_map_process(struct packet_manager *pm, uint32_t *buffer,
struct qcm_process_device *qpd)
{
struct pm4_map_process *packet;
struct queue *cur;
uint32_t num_queues;
BUG_ON(!pm || !buffer || !qpd);
packet = (struct pm4_map_process *)buffer;
pr_debug("kfd: In func %s\n", __func__);
memset(buffer, 0, sizeof(struct pm4_map_process));
packet->header.u32all = build_pm4_header(IT_MAP_PROCESS,
sizeof(struct pm4_map_process));
packet->bitfields2.diq_enable = (qpd->is_debug) ? 1 : 0;
packet->bitfields2.process_quantum = 1;
packet->bitfields2.pasid = qpd->pqm->process->pasid;
packet->bitfields3.page_table_base = qpd->page_table_base;
packet->bitfields10.gds_size = qpd->gds_size;
packet->bitfields10.num_gws = qpd->num_gws;
packet->bitfields10.num_oac = qpd->num_oac;
num_queues = 0;
list_for_each_entry(cur, &qpd->queues_list, list)
num_queues++;
packet->bitfields10.num_queues = num_queues;
packet->sh_mem_config = qpd->sh_mem_config;
packet->sh_mem_bases = qpd->sh_mem_bases;
packet->sh_mem_ape1_base = qpd->sh_mem_ape1_base;
packet->sh_mem_ape1_limit = qpd->sh_mem_ape1_limit;
packet->gds_addr_lo = lower_32_bits(qpd->gds_context_area);
packet->gds_addr_hi = upper_32_bits(qpd->gds_context_area);
return 0;
}
static int pm_create_map_queue(struct packet_manager *pm, uint32_t *buffer,
struct queue *q)
{
struct pm4_map_queues *packet;
BUG_ON(!pm || !buffer || !q);
pr_debug("kfd: In func %s\n", __func__);
packet = (struct pm4_map_queues *)buffer;
memset(buffer, 0, sizeof(struct pm4_map_queues));
packet->header.u32all = build_pm4_header(IT_MAP_QUEUES,
sizeof(struct pm4_map_queues));
packet->bitfields2.alloc_format =
alloc_format__mes_map_queues__one_per_pipe;
packet->bitfields2.num_queues = 1;
packet->bitfields2.queue_sel =
queue_sel__mes_map_queues__map_to_hws_determined_queue_slots;
packet->bitfields2.vidmem = (q->properties.is_interop) ?
vidmem__mes_map_queues__uses_video_memory :
vidmem__mes_map_queues__uses_no_video_memory;
switch (q->properties.type) {
case KFD_QUEUE_TYPE_COMPUTE:
case KFD_QUEUE_TYPE_DIQ:
packet->bitfields2.engine_sel =
engine_sel__mes_map_queues__compute;
break;
case KFD_QUEUE_TYPE_SDMA:
packet->bitfields2.engine_sel =
engine_sel__mes_map_queues__sdma0;
break;
default:
BUG();
break;
}
packet->mes_map_queues_ordinals[0].bitfields3.doorbell_offset =
q->properties.doorbell_off;
packet->mes_map_queues_ordinals[0].mqd_addr_lo =
lower_32_bits(q->gart_mqd_addr);
packet->mes_map_queues_ordinals[0].mqd_addr_hi =
upper_32_bits(q->gart_mqd_addr);
packet->mes_map_queues_ordinals[0].wptr_addr_lo =
lower_32_bits((uint64_t)q->properties.write_ptr);
packet->mes_map_queues_ordinals[0].wptr_addr_hi =
upper_32_bits((uint64_t)q->properties.write_ptr);
return 0;
}
static int pm_create_runlist_ib(struct packet_manager *pm,
struct list_head *queues,
uint64_t *rl_gpu_addr,
size_t *rl_size_bytes)
{
unsigned int alloc_size_bytes;
unsigned int *rl_buffer, rl_wptr, i;
int retval, proccesses_mapped;
struct device_process_node *cur;
struct qcm_process_device *qpd;
struct queue *q;
struct kernel_queue *kq;
bool is_over_subscription;
BUG_ON(!pm || !queues || !rl_size_bytes || !rl_gpu_addr);
rl_wptr = retval = proccesses_mapped = 0;
retval = pm_allocate_runlist_ib(pm, &rl_buffer, rl_gpu_addr,
&alloc_size_bytes, &is_over_subscription);
if (retval != 0)
return retval;
*rl_size_bytes = alloc_size_bytes;
pr_debug("kfd: In func %s\n", __func__);
pr_debug("kfd: building runlist ib process count: %d queues count %d\n",
pm->dqm->processes_count, pm->dqm->queue_count);
/* build the run list ib packet */
list_for_each_entry(cur, queues, list) {
qpd = cur->qpd;
/* build map process packet */
if (proccesses_mapped >= pm->dqm->processes_count) {
pr_debug("kfd: not enough space left in runlist IB\n");
pm_release_ib(pm);
return -ENOMEM;
}
retval = pm_create_map_process(pm, &rl_buffer[rl_wptr], qpd);
if (retval != 0)
return retval;
proccesses_mapped++;
inc_wptr(&rl_wptr, sizeof(struct pm4_map_process),
alloc_size_bytes);
list_for_each_entry(kq, &qpd->priv_queue_list, list) {
if (kq->queue->properties.is_active != true)
continue;
retval = pm_create_map_queue(pm, &rl_buffer[rl_wptr],
kq->queue);
if (retval != 0)
return retval;
inc_wptr(&rl_wptr, sizeof(struct pm4_map_queues),
alloc_size_bytes);
}
list_for_each_entry(q, &qpd->queues_list, list) {
if (q->properties.is_active != true)
continue;
retval = pm_create_map_queue(pm,
&rl_buffer[rl_wptr], q);
if (retval != 0)
return retval;
inc_wptr(&rl_wptr, sizeof(struct pm4_map_queues),
alloc_size_bytes);
}
}
pr_debug("kfd: finished map process and queues to runlist\n");
if (is_over_subscription)
pm_create_runlist(pm, &rl_buffer[rl_wptr], *rl_gpu_addr,
alloc_size_bytes / sizeof(uint32_t), true);
for (i = 0; i < alloc_size_bytes / sizeof(uint32_t); i++)
pr_debug("0x%2X ", rl_buffer[i]);
pr_debug("\n");
return 0;
}
int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm)
{
BUG_ON(!dqm);
pm->dqm = dqm;
mutex_init(&pm->lock);
pm->priv_queue = kernel_queue_init(dqm->dev, KFD_QUEUE_TYPE_HIQ);
if (pm->priv_queue == NULL) {
mutex_destroy(&pm->lock);
return -ENOMEM;
}
pm->allocated = false;
return 0;
}
void pm_uninit(struct packet_manager *pm)
{
BUG_ON(!pm);
mutex_destroy(&pm->lock);
kernel_queue_uninit(pm->priv_queue);
}
int pm_send_set_resources(struct packet_manager *pm,
struct scheduling_resources *res)
{
struct pm4_set_resources *packet;
BUG_ON(!pm || !res);
pr_debug("kfd: In func %s\n", __func__);
mutex_lock(&pm->lock);
pm->priv_queue->acquire_packet_buffer(pm->priv_queue,
sizeof(*packet) / sizeof(uint32_t),
(unsigned int **)&packet);
if (packet == NULL) {
mutex_unlock(&pm->lock);
pr_err("kfd: failed to allocate buffer on kernel queue\n");
return -ENOMEM;
}
memset(packet, 0, sizeof(struct pm4_set_resources));
packet->header.u32all = build_pm4_header(IT_SET_RESOURCES,
sizeof(struct pm4_set_resources));
packet->bitfields2.queue_type =
queue_type__mes_set_resources__hsa_interface_queue_hiq;
packet->bitfields2.vmid_mask = res->vmid_mask;
packet->bitfields2.unmap_latency = KFD_UNMAP_LATENCY;
packet->bitfields7.oac_mask = res->oac_mask;
packet->bitfields8.gds_heap_base = res->gds_heap_base;
packet->bitfields8.gds_heap_size = res->gds_heap_size;
packet->gws_mask_lo = lower_32_bits(res->gws_mask);
packet->gws_mask_hi = upper_32_bits(res->gws_mask);
packet->queue_mask_lo = lower_32_bits(res->queue_mask);
packet->queue_mask_hi = upper_32_bits(res->queue_mask);
pm->priv_queue->submit_packet(pm->priv_queue);
pm->priv_queue->sync_with_hw(pm->priv_queue, KFD_HIQ_TIMEOUT);
mutex_unlock(&pm->lock);
return 0;
}
int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues)
{
uint64_t rl_gpu_ib_addr;
uint32_t *rl_buffer;
size_t rl_ib_size, packet_size_dwords;
int retval;
BUG_ON(!pm || !dqm_queues);
retval = pm_create_runlist_ib(pm, dqm_queues, &rl_gpu_ib_addr,
&rl_ib_size);
if (retval != 0)
goto fail_create_runlist_ib;
pr_debug("kfd: runlist IB address: 0x%llX\n", rl_gpu_ib_addr);
packet_size_dwords = sizeof(struct pm4_runlist) / sizeof(uint32_t);
mutex_lock(&pm->lock);
retval = pm->priv_queue->acquire_packet_buffer(pm->priv_queue,
packet_size_dwords, &rl_buffer);
if (retval != 0)
goto fail_acquire_packet_buffer;
retval = pm_create_runlist(pm, rl_buffer, rl_gpu_ib_addr,
rl_ib_size / sizeof(uint32_t), false);
if (retval != 0)
goto fail_create_runlist;
pm->priv_queue->submit_packet(pm->priv_queue);
pm->priv_queue->sync_with_hw(pm->priv_queue, KFD_HIQ_TIMEOUT);
mutex_unlock(&pm->lock);
return retval;
fail_create_runlist:
pm->priv_queue->rollback_packet(pm->priv_queue);
fail_acquire_packet_buffer:
mutex_unlock(&pm->lock);
fail_create_runlist_ib:
if (pm->allocated == true)
pm_release_ib(pm);
return retval;
}
int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
uint32_t fence_value)
{
int retval;
struct pm4_query_status *packet;
BUG_ON(!pm || !fence_address);
mutex_lock(&pm->lock);
retval = pm->priv_queue->acquire_packet_buffer(
pm->priv_queue,
sizeof(struct pm4_query_status) / sizeof(uint32_t),
(unsigned int **)&packet);
if (retval != 0)
goto fail_acquire_packet_buffer;
packet->header.u32all = build_pm4_header(IT_QUERY_STATUS,
sizeof(struct pm4_query_status));
packet->bitfields2.context_id = 0;
packet->bitfields2.interrupt_sel =
interrupt_sel__mes_query_status__completion_status;
packet->bitfields2.command =
command__mes_query_status__fence_only_after_write_ack;
packet->addr_hi = upper_32_bits((uint64_t)fence_address);
packet->addr_lo = lower_32_bits((uint64_t)fence_address);
packet->data_hi = upper_32_bits((uint64_t)fence_value);
packet->data_lo = lower_32_bits((uint64_t)fence_value);
pm->priv_queue->submit_packet(pm->priv_queue);
pm->priv_queue->sync_with_hw(pm->priv_queue, KFD_HIQ_TIMEOUT);
mutex_unlock(&pm->lock);
return 0;
fail_acquire_packet_buffer:
mutex_unlock(&pm->lock);
return retval;
}
int pm_send_unmap_queue(struct packet_manager *pm, enum kfd_queue_type type,
enum kfd_preempt_type_filter mode,
uint32_t filter_param, bool reset,
unsigned int sdma_engine)
{
int retval;
uint32_t *buffer;
struct pm4_unmap_queues *packet;
BUG_ON(!pm);
mutex_lock(&pm->lock);
retval = pm->priv_queue->acquire_packet_buffer(
pm->priv_queue,
sizeof(struct pm4_unmap_queues) / sizeof(uint32_t),
&buffer);
if (retval != 0)
goto err_acquire_packet_buffer;
packet = (struct pm4_unmap_queues *)buffer;
memset(buffer, 0, sizeof(struct pm4_unmap_queues));
packet->header.u32all = build_pm4_header(IT_UNMAP_QUEUES,
sizeof(struct pm4_unmap_queues));
switch (type) {
case KFD_QUEUE_TYPE_COMPUTE:
case KFD_QUEUE_TYPE_DIQ:
packet->bitfields2.engine_sel =
engine_sel__mes_unmap_queues__compute;
break;
case KFD_QUEUE_TYPE_SDMA:
packet->bitfields2.engine_sel =
engine_sel__mes_unmap_queues__sdma0 + sdma_engine;
break;
default:
BUG();
break;
}
if (reset)
packet->bitfields2.action =
action__mes_unmap_queues__reset_queues;
else
packet->bitfields2.action =
action__mes_unmap_queues__preempt_queues;
switch (mode) {
case KFD_PREEMPT_TYPE_FILTER_SINGLE_QUEUE:
packet->bitfields2.queue_sel =
queue_sel__mes_unmap_queues__perform_request_on_specified_queues;
packet->bitfields2.num_queues = 1;
packet->bitfields3b.doorbell_offset0 = filter_param;
break;
case KFD_PREEMPT_TYPE_FILTER_BY_PASID:
packet->bitfields2.queue_sel =
queue_sel__mes_unmap_queues__perform_request_on_pasid_queues;
packet->bitfields3a.pasid = filter_param;
break;
case KFD_PREEMPT_TYPE_FILTER_ALL_QUEUES:
packet->bitfields2.queue_sel =
queue_sel__mes_unmap_queues__perform_request_on_all_active_queues;
break;
default:
BUG();
break;
};
pm->priv_queue->submit_packet(pm->priv_queue);
pm->priv_queue->sync_with_hw(pm->priv_queue, KFD_HIQ_TIMEOUT);
mutex_unlock(&pm->lock);
return 0;
err_acquire_packet_buffer:
mutex_unlock(&pm->lock);
return retval;
}
void pm_release_ib(struct packet_manager *pm)
{
BUG_ON(!pm);
mutex_lock(&pm->lock);
if (pm->allocated) {
kfd2kgd->free_mem(pm->dqm->dev->kgd,
(struct kgd_mem *) pm->ib_buffer_obj);
pm->allocated = false;
}
mutex_unlock(&pm->lock);
}
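pm_calc_rlib_size() sizes the runlist IB as one MAP_PROCESS packet per process plus one MAP_QUEUES packet per queue, with room for a chained RUN_LIST packet when the HWS queue slots are over-subscribed. A standalone model of that sizing (the packet sizes and slot count below are placeholders, not the real PM4 structure sizes):

/*
 * rlib_size_model.c - standalone model of pm_calc_rlib_size(); not driver
 * code, and all constants are placeholders.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAP_PROCESS_BYTES 60u	/* placeholder for sizeof(struct pm4_map_process) */
#define MAP_QUEUES_BYTES  40u	/* placeholder for sizeof(struct pm4_map_queues) */
#define RUNLIST_BYTES     16u	/* placeholder for sizeof(struct pm4_runlist) */
#define HWS_QUEUE_SLOTS   64u	/* placeholder for pipes * queues per pipe */

static unsigned int rlib_size(unsigned int processes, unsigned int queues,
			      bool *over_subscription)
{
	unsigned int size;

	*over_subscription = processes > 1 || queues > HWS_QUEUE_SLOTS;
	size = processes * MAP_PROCESS_BYTES + queues * MAP_QUEUES_BYTES;
	if (*over_subscription)
		size += RUNLIST_BYTES;	/* room for a chained RUN_LIST packet */
	return size;
}

int main(void)
{
	bool over;
	unsigned int size = rlib_size(2, 8, &over);

	printf("runlist IB of %u bytes, over-subscribed: %s\n",
	       size, over ? "yes" : "no");
	return 0;
}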


@ -0,0 +1,96 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/slab.h>
#include <linux/types.h>
#include "kfd_priv.h"
static unsigned long *pasid_bitmap;
static unsigned int pasid_limit;
static DEFINE_MUTEX(pasid_mutex);
int kfd_pasid_init(void)
{
pasid_limit = max_num_of_processes;
pasid_bitmap = kzalloc(BITS_TO_LONGS(pasid_limit) * sizeof(long), GFP_KERNEL);
if (!pasid_bitmap)
return -ENOMEM;
set_bit(0, pasid_bitmap); /* PASID 0 is reserved. */
return 0;
}
void kfd_pasid_exit(void)
{
kfree(pasid_bitmap);
}
bool kfd_set_pasid_limit(unsigned int new_limit)
{
if (new_limit < pasid_limit) {
bool ok;
mutex_lock(&pasid_mutex);
/* ensure that no pasids >= new_limit are in-use */
ok = (find_next_bit(pasid_bitmap, pasid_limit, new_limit) ==
pasid_limit);
if (ok)
pasid_limit = new_limit;
mutex_unlock(&pasid_mutex);
return ok;
}
return true;
}
inline unsigned int kfd_get_pasid_limit(void)
{
return pasid_limit;
}
unsigned int kfd_pasid_alloc(void)
{
unsigned int found;
mutex_lock(&pasid_mutex);
found = find_first_zero_bit(pasid_bitmap, pasid_limit);
if (found == pasid_limit)
found = 0;
else
set_bit(found, pasid_bitmap);
mutex_unlock(&pasid_mutex);
return found;
}
void kfd_pasid_free(unsigned int pasid)
{
BUG_ON(pasid == 0 || pasid >= pasid_limit);
clear_bit(pasid, pasid_bitmap);
}
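The PASID allocator above is a simple bitmap with PASID 0 reserved, and 0 doubling as the failure value of kfd_pasid_alloc(). A standalone userspace model of the same allocate/free behaviour (a plain array stands in for the kernel bitmap helpers, and the limit of 32 is arbitrary):

/*
 * pasid_model.c - userspace model of kfd_pasid_alloc()/kfd_pasid_free();
 * not driver code.
 */
#include <stdio.h>

#define PASID_LIMIT 32

static unsigned char used[PASID_LIMIT] = { 1 };	/* PASID 0 is reserved */

static unsigned int pasid_alloc(void)
{
	unsigned int i;

	for (i = 0; i < PASID_LIMIT; i++) {
		if (!used[i]) {
			used[i] = 1;
			return i;
		}
	}
	return 0;	/* 0 doubles as the "no PASID available" value */
}

static void pasid_free(unsigned int pasid)
{
	used[pasid] = 0;
}

int main(void)
{
	unsigned int a = pasid_alloc();	/* 1 */
	unsigned int b = pasid_alloc();	/* 2 */

	pasid_free(a);
	printf("got %u and %u, after freeing %u the next alloc returns %u\n",
	       a, b, a, pasid_alloc());
	return 0;
}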


@ -0,0 +1,405 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef KFD_PM4_HEADERS_H_
#define KFD_PM4_HEADERS_H_
#ifndef PM4_MES_HEADER_DEFINED
#define PM4_MES_HEADER_DEFINED
union PM4_MES_TYPE_3_HEADER {
struct {
uint32_t reserved1:8; /* < reserved */
uint32_t opcode:8; /* < IT opcode */
uint32_t count:14; /* < number of DWORDs - 1
* in the information body.
*/
uint32_t type:2; /* < packet identifier.
* It should be 3 for type 3 packets
*/
};
uint32_t u32all;
};
#endif /* PM4_MES_HEADER_DEFINED */
/* --------------------MES_SET_RESOURCES-------------------- */
#ifndef PM4_MES_SET_RESOURCES_DEFINED
#define PM4_MES_SET_RESOURCES_DEFINED
enum set_resources_queue_type_enum {
queue_type__mes_set_resources__kernel_interface_queue_kiq = 0,
queue_type__mes_set_resources__hsa_interface_queue_hiq = 1,
queue_type__mes_set_resources__hsa_debug_interface_queue = 4
};
struct pm4_set_resources {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
uint32_t vmid_mask:16;
uint32_t unmap_latency:8;
uint32_t reserved1:5;
enum set_resources_queue_type_enum queue_type:3;
} bitfields2;
uint32_t ordinal2;
};
uint32_t queue_mask_lo;
uint32_t queue_mask_hi;
uint32_t gws_mask_lo;
uint32_t gws_mask_hi;
union {
struct {
uint32_t oac_mask:16;
uint32_t reserved2:16;
} bitfields7;
uint32_t ordinal7;
};
union {
struct {
uint32_t gds_heap_base:6;
uint32_t reserved3:5;
uint32_t gds_heap_size:6;
uint32_t reserved4:15;
} bitfields8;
uint32_t ordinal8;
};
};
#endif
/*--------------------MES_RUN_LIST-------------------- */
#ifndef PM4_MES_RUN_LIST_DEFINED
#define PM4_MES_RUN_LIST_DEFINED
struct pm4_runlist {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
uint32_t reserved1:2;
uint32_t ib_base_lo:30;
} bitfields2;
uint32_t ordinal2;
};
union {
struct {
uint32_t ib_base_hi:16;
uint32_t reserved2:16;
} bitfields3;
uint32_t ordinal3;
};
union {
struct {
uint32_t ib_size:20;
uint32_t chain:1;
uint32_t offload_polling:1;
uint32_t reserved3:1;
uint32_t valid:1;
uint32_t reserved4:8;
} bitfields4;
uint32_t ordinal4;
};
};
#endif
/*--------------------MES_MAP_PROCESS-------------------- */
#ifndef PM4_MES_MAP_PROCESS_DEFINED
#define PM4_MES_MAP_PROCESS_DEFINED
struct pm4_map_process {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
uint32_t pasid:16;
uint32_t reserved1:8;
uint32_t diq_enable:1;
uint32_t process_quantum:7;
} bitfields2;
uint32_t ordinal2;
};
union {
struct {
uint32_t page_table_base:28;
uint32_t reserved3:4;
} bitfields3;
uint32_t ordinal3;
};
uint32_t sh_mem_bases;
uint32_t sh_mem_ape1_base;
uint32_t sh_mem_ape1_limit;
uint32_t sh_mem_config;
uint32_t gds_addr_lo;
uint32_t gds_addr_hi;
union {
struct {
uint32_t num_gws:6;
uint32_t reserved4:2;
uint32_t num_oac:4;
uint32_t reserved5:4;
uint32_t gds_size:6;
uint32_t num_queues:10;
} bitfields10;
uint32_t ordinal10;
};
};
#endif
/*--------------------MES_MAP_QUEUES--------------------*/
#ifndef PM4_MES_MAP_QUEUES_DEFINED
#define PM4_MES_MAP_QUEUES_DEFINED
enum map_queues_queue_sel_enum {
queue_sel__mes_map_queues__map_to_specified_queue_slots = 0,
queue_sel__mes_map_queues__map_to_hws_determined_queue_slots = 1,
queue_sel__mes_map_queues__enable_process_queues = 2
};
enum map_queues_vidmem_enum {
vidmem__mes_map_queues__uses_no_video_memory = 0,
vidmem__mes_map_queues__uses_video_memory = 1
};
enum map_queues_alloc_format_enum {
alloc_format__mes_map_queues__one_per_pipe = 0,
alloc_format__mes_map_queues__all_on_one_pipe = 1
};
enum map_queues_engine_sel_enum {
engine_sel__mes_map_queues__compute = 0,
engine_sel__mes_map_queues__sdma0 = 2,
engine_sel__mes_map_queues__sdma1 = 3
};
struct pm4_map_queues {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
uint32_t reserved1:4;
enum map_queues_queue_sel_enum queue_sel:2;
uint32_t reserved2:2;
uint32_t vmid:4;
uint32_t reserved3:4;
enum map_queues_vidmem_enum vidmem:2;
uint32_t reserved4:6;
enum map_queues_alloc_format_enum alloc_format:2;
enum map_queues_engine_sel_enum engine_sel:3;
uint32_t num_queues:3;
} bitfields2;
uint32_t ordinal2;
};
struct {
union {
struct {
uint32_t reserved5:2;
uint32_t doorbell_offset:21;
uint32_t reserved6:3;
uint32_t queue:6;
} bitfields3;
uint32_t ordinal3;
};
uint32_t mqd_addr_lo;
uint32_t mqd_addr_hi;
uint32_t wptr_addr_lo;
uint32_t wptr_addr_hi;
} mes_map_queues_ordinals[1]; /* 1..N of these ordinal groups */
};
#endif
/*--------------------MES_QUERY_STATUS--------------------*/
#ifndef PM4_MES_QUERY_STATUS_DEFINED
#define PM4_MES_QUERY_STATUS_DEFINED
enum query_status_interrupt_sel_enum {
interrupt_sel__mes_query_status__completion_status = 0,
interrupt_sel__mes_query_status__process_status = 1,
interrupt_sel__mes_query_status__queue_status = 2
};
enum query_status_command_enum {
command__mes_query_status__interrupt_only = 0,
command__mes_query_status__fence_only_immediate = 1,
command__mes_query_status__fence_only_after_write_ack = 2,
command__mes_query_status__fence_wait_for_write_ack_send_interrupt = 3
};
enum query_status_engine_sel_enum {
engine_sel__mes_query_status__compute = 0,
engine_sel__mes_query_status__sdma0_queue = 2,
engine_sel__mes_query_status__sdma1_queue = 3
};
struct pm4_query_status {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
uint32_t context_id:28;
enum query_status_interrupt_sel_enum interrupt_sel:2;
enum query_status_command_enum command:2;
} bitfields2;
uint32_t ordinal2;
};
union {
struct {
uint32_t pasid:16;
uint32_t reserved1:16;
} bitfields3a;
struct {
uint32_t reserved2:2;
uint32_t doorbell_offset:21;
uint32_t reserved3:3;
enum query_status_engine_sel_enum engine_sel:3;
uint32_t reserved4:3;
} bitfields3b;
uint32_t ordinal3;
};
uint32_t addr_lo;
uint32_t addr_hi;
uint32_t data_lo;
uint32_t data_hi;
};
#endif
/*--------------------MES_UNMAP_QUEUES--------------------*/
#ifndef PM4_MES_UNMAP_QUEUES_DEFINED
#define PM4_MES_UNMAP_QUEUES_DEFINED
enum unmap_queues_action_enum {
action__mes_unmap_queues__preempt_queues = 0,
action__mes_unmap_queues__reset_queues = 1,
action__mes_unmap_queues__disable_process_queues = 2
};
enum unmap_queues_queue_sel_enum {
queue_sel__mes_unmap_queues__perform_request_on_specified_queues = 0,
queue_sel__mes_unmap_queues__perform_request_on_pasid_queues = 1,
queue_sel__mes_unmap_queues__perform_request_on_all_active_queues = 2
};
enum unmap_queues_engine_sel_enum {
engine_sel__mes_unmap_queues__compute = 0,
engine_sel__mes_unmap_queues__sdma0 = 2,
engine_sel__mes_unmap_queues__sdma1 = 3
};
struct pm4_unmap_queues {
union {
union PM4_MES_TYPE_3_HEADER header; /* header */
uint32_t ordinal1;
};
union {
struct {
enum unmap_queues_action_enum action:2;
uint32_t reserved1:2;
enum unmap_queues_queue_sel_enum queue_sel:2;
uint32_t reserved2:20;
enum unmap_queues_engine_sel_enum engine_sel:3;
uint32_t num_queues:3;
} bitfields2;
uint32_t ordinal2;
};
union {
struct {
uint32_t pasid:16;
uint32_t reserved3:16;
} bitfields3a;
struct {
uint32_t reserved4:2;
uint32_t doorbell_offset0:21;
uint32_t reserved5:9;
} bitfields3b;
uint32_t ordinal3;
};
union {
struct {
uint32_t reserved6:2;
uint32_t doorbell_offset1:21;
uint32_t reserved7:9;
} bitfields4;
uint32_t ordinal4;
};
union {
struct {
uint32_t reserved8:2;
uint32_t doorbell_offset2:21;
uint32_t reserved9:9;
} bitfields5;
uint32_t ordinal5;
};
union {
struct {
uint32_t reserved10:2;
uint32_t doorbell_offset3:21;
uint32_t reserved11:9;
} bitfields6;
uint32_t ordinal6;
};
};
#endif
enum {
CACHE_FLUSH_AND_INV_TS_EVENT = 0x00000014
};
#endif /* KFD_PM4_HEADERS_H_ */
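/*
 * Illustrative sketch (not part of this patch): filling in a single-queue
 * MAP_QUEUES packet using the structures above.  The pm4_header argument is
 * assumed to be a ready-made type-3 header dword (IT_MAP_QUEUES opcode plus
 * dword count), and lower_32()/upper_32() are the address-split helpers
 * declared in kfd_priv.h.
 */
static void example_fill_map_queues(struct pm4_map_queues *pkt,
				    uint32_t pm4_header,
				    enum map_queues_queue_sel_enum queue_sel,
				    uint64_t mqd_gpu_addr,
				    uint64_t wptr_gpu_addr,
				    unsigned int doorbell_off)
{
	memset(pkt, 0, sizeof(*pkt));

	pkt->ordinal1 = pm4_header;
	pkt->bitfields2.queue_sel = queue_sel;
	pkt->bitfields2.vidmem = vidmem__mes_map_queues__uses_no_video_memory;
	pkt->bitfields2.alloc_format =
			alloc_format__mes_map_queues__all_on_one_pipe;
	pkt->bitfields2.engine_sel = engine_sel__mes_map_queues__compute;
	pkt->bitfields2.num_queues = 1;

	/* One ordinal group per mapped queue; only one queue here. */
	pkt->mes_map_queues_ordinals[0].bitfields3.doorbell_offset =
			doorbell_off;
	pkt->mes_map_queues_ordinals[0].mqd_addr_lo = lower_32(mqd_gpu_addr);
	pkt->mes_map_queues_ordinals[0].mqd_addr_hi = upper_32(mqd_gpu_addr);
	pkt->mes_map_queues_ordinals[0].wptr_addr_lo = lower_32(wptr_gpu_addr);
	pkt->mes_map_queues_ordinals[0].wptr_addr_hi = upper_32(wptr_gpu_addr);
}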


@ -0,0 +1,107 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#ifndef KFD_PM4_OPCODES_H
#define KFD_PM4_OPCODES_H
enum it_opcode_type {
IT_NOP = 0x10,
IT_SET_BASE = 0x11,
IT_CLEAR_STATE = 0x12,
IT_INDEX_BUFFER_SIZE = 0x13,
IT_DISPATCH_DIRECT = 0x15,
IT_DISPATCH_INDIRECT = 0x16,
IT_ATOMIC_GDS = 0x1D,
IT_OCCLUSION_QUERY = 0x1F,
IT_SET_PREDICATION = 0x20,
IT_REG_RMW = 0x21,
IT_COND_EXEC = 0x22,
IT_PRED_EXEC = 0x23,
IT_DRAW_INDIRECT = 0x24,
IT_DRAW_INDEX_INDIRECT = 0x25,
IT_INDEX_BASE = 0x26,
IT_DRAW_INDEX_2 = 0x27,
IT_CONTEXT_CONTROL = 0x28,
IT_INDEX_TYPE = 0x2A,
IT_DRAW_INDIRECT_MULTI = 0x2C,
IT_DRAW_INDEX_AUTO = 0x2D,
IT_NUM_INSTANCES = 0x2F,
IT_DRAW_INDEX_MULTI_AUTO = 0x30,
IT_INDIRECT_BUFFER_CNST = 0x33,
IT_STRMOUT_BUFFER_UPDATE = 0x34,
IT_DRAW_INDEX_OFFSET_2 = 0x35,
IT_DRAW_PREAMBLE = 0x36,
IT_WRITE_DATA = 0x37,
IT_DRAW_INDEX_INDIRECT_MULTI = 0x38,
IT_MEM_SEMAPHORE = 0x39,
IT_COPY_DW = 0x3B,
IT_WAIT_REG_MEM = 0x3C,
IT_INDIRECT_BUFFER = 0x3F,
IT_COPY_DATA = 0x40,
IT_PFP_SYNC_ME = 0x42,
IT_SURFACE_SYNC = 0x43,
IT_COND_WRITE = 0x45,
IT_EVENT_WRITE = 0x46,
IT_EVENT_WRITE_EOP = 0x47,
IT_EVENT_WRITE_EOS = 0x48,
IT_RELEASE_MEM = 0x49,
IT_PREAMBLE_CNTL = 0x4A,
IT_DMA_DATA = 0x50,
IT_ACQUIRE_MEM = 0x58,
IT_REWIND = 0x59,
IT_LOAD_UCONFIG_REG = 0x5E,
IT_LOAD_SH_REG = 0x5F,
IT_LOAD_CONFIG_REG = 0x60,
IT_LOAD_CONTEXT_REG = 0x61,
IT_SET_CONFIG_REG = 0x68,
IT_SET_CONTEXT_REG = 0x69,
IT_SET_CONTEXT_REG_INDIRECT = 0x73,
IT_SET_SH_REG = 0x76,
IT_SET_SH_REG_OFFSET = 0x77,
IT_SET_QUEUE_REG = 0x78,
IT_SET_UCONFIG_REG = 0x79,
IT_SCRATCH_RAM_WRITE = 0x7D,
IT_SCRATCH_RAM_READ = 0x7E,
IT_LOAD_CONST_RAM = 0x80,
IT_WRITE_CONST_RAM = 0x81,
IT_DUMP_CONST_RAM = 0x83,
IT_INCREMENT_CE_COUNTER = 0x84,
IT_INCREMENT_DE_COUNTER = 0x85,
IT_WAIT_ON_CE_COUNTER = 0x86,
IT_WAIT_ON_DE_COUNTER_DIFF = 0x88,
IT_SWITCH_BUFFER = 0x8B,
IT_SET_RESOURCES = 0xA0,
IT_MAP_PROCESS = 0xA1,
IT_MAP_QUEUES = 0xA2,
IT_UNMAP_QUEUES = 0xA3,
IT_QUERY_STATUS = 0xA4,
IT_RUN_LIST = 0xA5,
};
#define PM4_TYPE_0 0
#define PM4_TYPE_2 2
#define PM4_TYPE_3 3
#endif /* KFD_PM4_OPCODES_H */
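/*
 * Illustrative sketch (not part of this patch): encoding a PM4 type-3
 * header dword from one of the opcodes above, assuming the usual CP packet
 * layout (packet type in bits 31:30, dword count minus two in bits 29:16,
 * opcode in bits 15:8).  The authoritative header layout lives in
 * kfd_pm4_headers.h.
 */
static inline uint32_t example_pm4_type3_header(enum it_opcode_type opcode,
						unsigned int packet_size_dw)
{
	return (PM4_TYPE_3 << 30) |
	       (((packet_size_dw - 2) & 0x3fff) << 16) |
	       (((uint32_t)opcode & 0xff) << 8);
}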


@ -0,0 +1,600 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef KFD_PRIV_H_INCLUDED
#define KFD_PRIV_H_INCLUDED
#include <linux/hashtable.h>
#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/atomic.h>
#include <linux/workqueue.h>
#include <linux/spinlock.h>
#include <linux/kfd_ioctl.h>
#include <kgd_kfd_interface.h>
#define KFD_SYSFS_FILE_MODE 0444
/*
* When working with the cp scheduler we should assign the HIQ manually or via
* the radeon driver to a fixed hqd slot; these are the fixed HIQ hqd slot
* definitions for Kaveri. In Kaveri only the first ME's queues participate
* in the cp scheduling, so with that in mind we set the HIQ slot in the
* second ME.
*/
#define KFD_CIK_HIQ_PIPE 4
#define KFD_CIK_HIQ_QUEUE 0
/* GPU ID hash width in bits */
#define KFD_GPU_ID_HASH_WIDTH 16
/* Macro for allocating structures */
#define kfd_alloc_struct(ptr_to_struct) \
((typeof(ptr_to_struct)) kzalloc(sizeof(*ptr_to_struct), GFP_KERNEL))
/* Kernel module parameter to specify maximum number of supported processes */
extern int max_num_of_processes;
#define KFD_MAX_NUM_OF_PROCESSES_DEFAULT 32
#define KFD_MAX_NUM_OF_PROCESSES 512
/*
* Kernel module parameter to specify maximum number of supported queues
* per process
*/
extern int max_num_of_queues_per_process;
#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS_DEFAULT 128
#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS 1024
#define KFD_KERNEL_QUEUE_SIZE 2048
/* Kernel module parameter to specify the scheduling policy */
extern int sched_policy;
/**
* enum kfd_sched_policy
*
* @KFD_SCHED_POLICY_HWS: H/W scheduling policy known as command processor (cp)
* scheduling. In this scheduling mode the firmware code is used to
* schedule the user mode queues and kernel queues such as HIQ and DIQ.
* The HIQ queue is a special queue that dispatches the configuration
* to the cp and the list of user mode queues that are currently running.
* The DIQ queue is a debugging queue that dispatches debugging commands to the
* firmware.
* In this scheduling mode the over-subscription feature for user mode queues
* is enabled.
*
* @KFD_SCHED_POLICY_HWS_NO_OVERSUBSCRIPTION: The same as above but with the
* over-subscription feature disabled.
*
* @KFD_SCHED_POLICY_NO_HWS: "No H/W scheduling" is a mode which directly
* sets the command processor registers and sets up the queues "manually".
* This mode is used *ONLY* for debugging purposes.
*
*/
enum kfd_sched_policy {
KFD_SCHED_POLICY_HWS = 0,
KFD_SCHED_POLICY_HWS_NO_OVERSUBSCRIPTION,
KFD_SCHED_POLICY_NO_HWS
};
enum cache_policy {
cache_policy_coherent,
cache_policy_noncoherent
};
struct kfd_device_info {
unsigned int max_pasid_bits;
size_t ih_ring_entry_size;
uint16_t mqd_size_aligned;
};
struct kfd_dev {
struct kgd_dev *kgd;
const struct kfd_device_info *device_info;
struct pci_dev *pdev;
unsigned int id; /* topology stub index */
phys_addr_t doorbell_base; /* Start of actual doorbells used by
* KFD. It is aligned for mapping
* into user mode
*/
size_t doorbell_id_offset; /* Doorbell offset (from KFD doorbell
* to HW doorbell, GFX reserved some
* at the start)
*/
size_t doorbell_process_limit; /* Number of processes we have doorbell
* space for.
*/
u32 __iomem *doorbell_kernel_ptr; /* This is a pointer to the doorbell
* page used by the kernel queue
*/
struct kgd2kfd_shared_resources shared_resources;
void *interrupt_ring;
size_t interrupt_ring_size;
atomic_t interrupt_ring_rptr;
atomic_t interrupt_ring_wptr;
struct work_struct interrupt_work;
spinlock_t interrupt_lock;
/* QCM Device instance */
struct device_queue_manager *dqm;
bool init_complete;
/*
* Interrupts of interest to KFD are copied
* from the HW ring into a SW ring.
*/
bool interrupts_active;
};
/* KGD2KFD callbacks */
void kgd2kfd_exit(void);
struct kfd_dev *kgd2kfd_probe(struct kgd_dev *kgd, struct pci_dev *pdev);
bool kgd2kfd_device_init(struct kfd_dev *kfd,
const struct kgd2kfd_shared_resources *gpu_resources);
void kgd2kfd_device_exit(struct kfd_dev *kfd);
extern const struct kfd2kgd_calls *kfd2kgd;
struct kfd_mem_obj {
void *bo;
uint64_t gpu_addr;
uint32_t *cpu_ptr;
};
enum kfd_mempool {
KFD_MEMPOOL_SYSTEM_CACHEABLE = 1,
KFD_MEMPOOL_SYSTEM_WRITECOMBINE = 2,
KFD_MEMPOOL_FRAMEBUFFER = 3,
};
/* Character device interface */
int kfd_chardev_init(void);
void kfd_chardev_exit(void);
struct device *kfd_chardev(void);
/**
* enum kfd_preempt_type_filter
*
* @KFD_PREEMPT_TYPE_FILTER_SINGLE_QUEUE: Preempts a single queue.
*
* @KFD_PREEMPT_TYPE_FILTER_ALL_QUEUES: Preempts all queues in the
* running queues list.
*
* @KFD_PREEMPT_TYPE_FILTER_BY_PASID: Preempts queues that belong to a
* specific process.
*
*/
enum kfd_preempt_type_filter {
KFD_PREEMPT_TYPE_FILTER_SINGLE_QUEUE,
KFD_PREEMPT_TYPE_FILTER_ALL_QUEUES,
KFD_PREEMPT_TYPE_FILTER_BY_PASID
};
enum kfd_preempt_type {
KFD_PREEMPT_TYPE_WAVEFRONT,
KFD_PREEMPT_TYPE_WAVEFRONT_RESET
};
/**
* enum kfd_queue_type
*
* @KFD_QUEUE_TYPE_COMPUTE: Regular user mode queue type.
*
* @KFD_QUEUE_TYPE_SDMA: Sdma user mode queue type.
*
* @KFD_QUEUE_TYPE_HIQ: HIQ queue type.
*
* @KFD_QUEUE_TYPE_DIQ: DIQ queue type.
*/
enum kfd_queue_type {
KFD_QUEUE_TYPE_COMPUTE,
KFD_QUEUE_TYPE_SDMA,
KFD_QUEUE_TYPE_HIQ,
KFD_QUEUE_TYPE_DIQ
};
enum kfd_queue_format {
KFD_QUEUE_FORMAT_PM4,
KFD_QUEUE_FORMAT_AQL
};
/**
* struct queue_properties
*
* @type: The queue type.
*
* @queue_id: Queue identifier.
*
* @queue_address: Queue ring buffer address.
*
* @queue_size: Queue ring buffer size.
*
* @priority: Defines the queue priority relative to other queues in the
* process.
* This is just an indication and HW scheduling may override the priority as
* necessary while keeping the relative prioritization.
* The priority range is from 0 to f, where f is the highest priority.
* Currently all queues are initialized with the highest priority.
*
* @queue_percent: This field is partially implemented; currently a zero in
* this field means that the queue is not active.
*
* @read_ptr: User space address which points to the number of dwords the
* cp has read from the ring buffer. This field is updated automatically by the H/W.
*
* @write_ptr: Defines the number of dwords written to the ring buffer.
*
* @doorbell_ptr: The purpose of this field is to notify the H/W of a new packet
* written to the queue ring buffer. It should mirror write_ptr, and user space
* should update it after updating write_ptr.
*
* @doorbell_off: The doorbell offset in the doorbell pci-bar.
*
* @is_interop: Defines if this is an interop queue. An interop queue means that the
* queue can access both graphics and compute resources.
*
* @is_active: Defines if the queue is active or not.
*
* @vmid: If the scheduling mode is no cp scheduling, this field defines the vmid
* of the queue.
*
* This structure represents the queue properties for each queue no matter if
* it's user mode or kernel mode queue.
*
*/
struct queue_properties {
enum kfd_queue_type type;
enum kfd_queue_format format;
unsigned int queue_id;
uint64_t queue_address;
uint64_t queue_size;
uint32_t priority;
uint32_t queue_percent;
uint32_t *read_ptr;
uint32_t *write_ptr;
uint32_t __iomem *doorbell_ptr;
uint32_t doorbell_off;
bool is_interop;
bool is_active;
/* Not relevant for user mode queues in cp scheduling */
unsigned int vmid;
};
/**
* struct queue
*
* @list: Queue linked list.
*
* @mqd: The queue MQD.
*
* @mqd_mem_obj: The MQD local gpu memory object.
*
* @gart_mqd_addr: The MQD gart mc address.
*
* @properties: The queue properties.
*
* @mec: Used only in no cp scheduling mode and identifies the micro engine id
* that the queue should be executed on.
*
* @pipe: Used only in no cp scheduling mode and identifies the queue's pipe id.
*
* @queue: Used only in no cp scheduling mode and identifies the queue's slot.
*
* @process: The kfd process that created this queue.
*
* @device: The kfd device that created this queue.
*
* This structure represents user mode compute queues.
* It contains all the necessary data to handle such queues.
*
*/
struct queue {
struct list_head list;
void *mqd;
struct kfd_mem_obj *mqd_mem_obj;
uint64_t gart_mqd_addr;
struct queue_properties properties;
uint32_t mec;
uint32_t pipe;
uint32_t queue;
struct kfd_process *process;
struct kfd_dev *device;
};
/*
* Please read the kfd_mqd_manager.h description.
*/
enum KFD_MQD_TYPE {
KFD_MQD_TYPE_CIK_COMPUTE = 0, /* for no cp scheduling */
KFD_MQD_TYPE_CIK_HIQ, /* for hiq */
KFD_MQD_TYPE_CIK_CP, /* for cp queues and diq */
KFD_MQD_TYPE_CIK_SDMA, /* for sdma queues */
KFD_MQD_TYPE_MAX
};
struct scheduling_resources {
unsigned int vmid_mask;
enum kfd_queue_type type;
uint64_t queue_mask;
uint64_t gws_mask;
uint32_t oac_mask;
uint32_t gds_heap_base;
uint32_t gds_heap_size;
};
struct process_queue_manager {
/* data */
struct kfd_process *process;
unsigned int num_concurrent_processes;
struct list_head queues;
unsigned long *queue_slot_bitmap;
};
struct qcm_process_device {
/* The Device Queue Manager that owns this data */
struct device_queue_manager *dqm;
struct process_queue_manager *pqm;
/* Device Queue Manager lock */
struct mutex *lock;
/* Queues list */
struct list_head queues_list;
struct list_head priv_queue_list;
unsigned int queue_count;
unsigned int vmid;
bool is_debug;
/*
* All the memory management data should be here too
*/
uint64_t gds_context_area;
uint32_t sh_mem_config;
uint32_t sh_mem_bases;
uint32_t sh_mem_ape1_base;
uint32_t sh_mem_ape1_limit;
uint32_t page_table_base;
uint32_t gds_size;
uint32_t num_gws;
uint32_t num_oac;
};
/* Data that is per-process-per device. */
struct kfd_process_device {
/*
* List of all per-device data for a process.
* Starts from kfd_process.per_device_data.
*/
struct list_head per_device_list;
/* The device that owns this data. */
struct kfd_dev *dev;
/* per-process-per device QCM data structure */
struct qcm_process_device qpd;
/* Apertures */
uint64_t lds_base;
uint64_t lds_limit;
uint64_t gpuvm_base;
uint64_t gpuvm_limit;
uint64_t scratch_base;
uint64_t scratch_limit;
/* Is this process/pasid bound to this device? (amd_iommu_bind_pasid) */
bool bound;
};
#define qpd_to_pdd(x) container_of(x, struct kfd_process_device, qpd)
/* Process data */
struct kfd_process {
/*
* kfd_process are stored in an mm_struct*->kfd_process*
* hash table (kfd_processes in kfd_process.c)
*/
struct hlist_node kfd_processes;
struct mm_struct *mm;
struct mutex mutex;
/*
* In any process, the thread that started main() is the lead
* thread and outlives the rest.
* It is here because amd_iommu_bind_pasid wants a task_struct.
*/
struct task_struct *lead_thread;
/* We want to receive a notification when the mm_struct is destroyed */
struct mmu_notifier mmu_notifier;
/* Use for delayed freeing of kfd_process structure */
struct rcu_head rcu;
unsigned int pasid;
/*
* List of kfd_process_device structures,
* one for each device the process is using.
*/
struct list_head per_device_data;
struct process_queue_manager pqm;
/* The process's queues. */
size_t queue_array_size;
/* Size is queue_array_size, up to MAX_PROCESS_QUEUES. */
struct kfd_queue **queues;
unsigned long allocated_queue_bitmap[DIV_ROUND_UP(KFD_MAX_NUM_OF_QUEUES_PER_PROCESS, BITS_PER_LONG)];
/* Is the user space process 32 bit? */
bool is_32bit_user_mode;
};
void kfd_process_create_wq(void);
void kfd_process_destroy_wq(void);
struct kfd_process *kfd_create_process(const struct task_struct *);
struct kfd_process *kfd_get_process(const struct task_struct *);
struct kfd_process_device *kfd_bind_process_to_device(struct kfd_dev *dev,
struct kfd_process *p);
void kfd_unbind_process_from_device(struct kfd_dev *dev, unsigned int pasid);
struct kfd_process_device *kfd_get_process_device_data(struct kfd_dev *dev,
struct kfd_process *p,
int create_pdd);
/* Process device data iterator */
struct kfd_process_device *kfd_get_first_process_device_data(struct kfd_process *p);
struct kfd_process_device *kfd_get_next_process_device_data(struct kfd_process *p,
struct kfd_process_device *pdd);
bool kfd_has_process_device_data(struct kfd_process *p);
/* PASIDs */
int kfd_pasid_init(void);
void kfd_pasid_exit(void);
bool kfd_set_pasid_limit(unsigned int new_limit);
unsigned int kfd_get_pasid_limit(void);
unsigned int kfd_pasid_alloc(void);
void kfd_pasid_free(unsigned int pasid);
/* Doorbells */
void kfd_doorbell_init(struct kfd_dev *kfd);
int kfd_doorbell_mmap(struct kfd_process *process, struct vm_area_struct *vma);
u32 __iomem *kfd_get_kernel_doorbell(struct kfd_dev *kfd,
unsigned int *doorbell_off);
void kfd_release_kernel_doorbell(struct kfd_dev *kfd, u32 __iomem *db_addr);
u32 read_kernel_doorbell(u32 __iomem *db);
void write_kernel_doorbell(u32 __iomem *db, u32 value);
unsigned int kfd_queue_id_to_doorbell(struct kfd_dev *kfd,
struct kfd_process *process,
unsigned int queue_id);
extern struct device *kfd_device;
/* Topology */
int kfd_topology_init(void);
void kfd_topology_shutdown(void);
int kfd_topology_add_device(struct kfd_dev *gpu);
int kfd_topology_remove_device(struct kfd_dev *gpu);
struct kfd_dev *kfd_device_by_id(uint32_t gpu_id);
struct kfd_dev *kfd_device_by_pci_dev(const struct pci_dev *pdev);
struct kfd_dev *kfd_topology_enum_kfd_devices(uint8_t idx);
/* Interrupts */
int kfd_interrupt_init(struct kfd_dev *dev);
void kfd_interrupt_exit(struct kfd_dev *dev);
void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry);
bool enqueue_ih_ring_entry(struct kfd_dev *kfd, const void *ih_ring_entry);
/* Power Management */
void kgd2kfd_suspend(struct kfd_dev *kfd);
int kgd2kfd_resume(struct kfd_dev *kfd);
/* amdkfd Apertures */
int kfd_init_apertures(struct kfd_process *process);
/* Queue Context Management */
inline uint32_t lower_32(uint64_t x);
inline uint32_t upper_32(uint64_t x);
int init_queue(struct queue **q, struct queue_properties properties);
void uninit_queue(struct queue *q);
void print_queue_properties(struct queue_properties *q);
void print_queue(struct queue *q);
struct mqd_manager *mqd_manager_init(enum KFD_MQD_TYPE type,
struct kfd_dev *dev);
struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev);
void device_queue_manager_uninit(struct device_queue_manager *dqm);
struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
enum kfd_queue_type type);
void kernel_queue_uninit(struct kernel_queue *kq);
/* Process Queue Manager */
struct process_queue_node {
struct queue *q;
struct kernel_queue *kq;
struct list_head process_queue_list;
};
int pqm_init(struct process_queue_manager *pqm, struct kfd_process *p);
void pqm_uninit(struct process_queue_manager *pqm);
int pqm_create_queue(struct process_queue_manager *pqm,
struct kfd_dev *dev,
struct file *f,
struct queue_properties *properties,
unsigned int flags,
enum kfd_queue_type type,
unsigned int *qid);
int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid);
int pqm_update_queue(struct process_queue_manager *pqm, unsigned int qid,
struct queue_properties *p);
/* Packet Manager */
#define KFD_HIQ_TIMEOUT (500)
#define KFD_FENCE_COMPLETED (100)
#define KFD_FENCE_INIT (10)
#define KFD_UNMAP_LATENCY (150)
struct packet_manager {
struct device_queue_manager *dqm;
struct kernel_queue *priv_queue;
struct mutex lock;
bool allocated;
struct kfd_mem_obj *ib_buffer_obj;
};
int pm_init(struct packet_manager *pm, struct device_queue_manager *dqm);
void pm_uninit(struct packet_manager *pm);
int pm_send_set_resources(struct packet_manager *pm,
struct scheduling_resources *res);
int pm_send_runlist(struct packet_manager *pm, struct list_head *dqm_queues);
int pm_send_query_status(struct packet_manager *pm, uint64_t fence_address,
uint32_t fence_value);
int pm_send_unmap_queue(struct packet_manager *pm, enum kfd_queue_type type,
enum kfd_preempt_type_filter mode,
uint32_t filter_param, bool reset,
unsigned int sdma_engine);
void pm_release_ib(struct packet_manager *pm);
uint64_t kfd_get_number_elems(struct kfd_dev *kfd);
phys_addr_t kfd_get_process_doorbells(struct kfd_dev *dev,
struct kfd_process *process);
#endif
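/*
 * Illustrative sketch (not part of this patch): the rough call sequence a
 * caller such as the ioctl layer is expected to follow with the process
 * queue manager API declared above.  Property values here are made up and
 * most error handling is trimmed.
 */
static int example_create_compute_queue(struct kfd_dev *dev,
					struct kfd_process *p,
					struct queue_properties *props,
					unsigned int *qid)
{
	struct kfd_process_device *pdd;
	int err;

	props->type = KFD_QUEUE_TYPE_COMPUTE;
	props->format = KFD_QUEUE_FORMAT_PM4;
	props->queue_percent = 100;	/* non-zero means "active" */
	props->priority = 0xf;		/* 0xf is the highest priority */

	mutex_lock(&p->mutex);

	/* kfd_bind_process_to_device() expects the process lock to be held. */
	pdd = kfd_bind_process_to_device(dev, p);
	if (IS_ERR(pdd)) {
		mutex_unlock(&p->mutex);
		return PTR_ERR(pdd);
	}

	err = pqm_create_queue(&p->pqm, dev, NULL, props, 0,
			       KFD_QUEUE_TYPE_COMPUTE, qid);

	mutex_unlock(&p->mutex);
	return err;
}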


@ -0,0 +1,410 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/mutex.h>
#include <linux/log2.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/amd-iommu.h>
#include <linux/notifier.h>
struct mm_struct;
#include "kfd_priv.h"
/*
* Initial size for the array of queues.
* The allocated size is doubled each time
* it is exceeded up to MAX_PROCESS_QUEUES.
*/
#define INITIAL_QUEUE_ARRAY_SIZE 16
/*
* List of struct kfd_process (field kfd_process).
* Unique/indexed by mm_struct*
*/
#define KFD_PROCESS_TABLE_SIZE 5 /* bits: 32 entries */
static DEFINE_HASHTABLE(kfd_processes_table, KFD_PROCESS_TABLE_SIZE);
static DEFINE_MUTEX(kfd_processes_mutex);
DEFINE_STATIC_SRCU(kfd_processes_srcu);
static struct workqueue_struct *kfd_process_wq;
struct kfd_process_release_work {
struct work_struct kfd_work;
struct kfd_process *p;
};
static struct kfd_process *find_process(const struct task_struct *thread);
static struct kfd_process *create_process(const struct task_struct *thread);
void kfd_process_create_wq(void)
{
if (!kfd_process_wq)
kfd_process_wq = create_workqueue("kfd_process_wq");
}
void kfd_process_destroy_wq(void)
{
if (kfd_process_wq) {
flush_workqueue(kfd_process_wq);
destroy_workqueue(kfd_process_wq);
kfd_process_wq = NULL;
}
}
struct kfd_process *kfd_create_process(const struct task_struct *thread)
{
struct kfd_process *process;
BUG_ON(!kfd_process_wq);
if (thread->mm == NULL)
return ERR_PTR(-EINVAL);
/* Only the pthreads threading model is supported. */
if (thread->group_leader->mm != thread->mm)
return ERR_PTR(-EINVAL);
/* Take mmap_sem because we call __mmu_notifier_register inside */
down_write(&thread->mm->mmap_sem);
/*
* Take the kfd processes mutex before starting process creation
* so there won't be a case where two threads of the same process
* create two kfd_process structures
*/
mutex_lock(&kfd_processes_mutex);
/* A prior open of /dev/kfd could have already created the process. */
process = find_process(thread);
if (process)
pr_debug("kfd: process already found\n");
if (!process)
process = create_process(thread);
mutex_unlock(&kfd_processes_mutex);
up_write(&thread->mm->mmap_sem);
return process;
}
struct kfd_process *kfd_get_process(const struct task_struct *thread)
{
struct kfd_process *process;
if (thread->mm == NULL)
return ERR_PTR(-EINVAL);
/* Only the pthreads threading model is supported. */
if (thread->group_leader->mm != thread->mm)
return ERR_PTR(-EINVAL);
process = find_process(thread);
return process;
}
static struct kfd_process *find_process_by_mm(const struct mm_struct *mm)
{
struct kfd_process *process;
hash_for_each_possible_rcu(kfd_processes_table, process,
kfd_processes, (uintptr_t)mm)
if (process->mm == mm)
return process;
return NULL;
}
static struct kfd_process *find_process(const struct task_struct *thread)
{
struct kfd_process *p;
int idx;
idx = srcu_read_lock(&kfd_processes_srcu);
p = find_process_by_mm(thread->mm);
srcu_read_unlock(&kfd_processes_srcu, idx);
return p;
}
static void kfd_process_wq_release(struct work_struct *work)
{
struct kfd_process_release_work *my_work;
struct kfd_process_device *pdd, *temp;
struct kfd_process *p;
my_work = (struct kfd_process_release_work *) work;
p = my_work->p;
mutex_lock(&p->mutex);
list_for_each_entry_safe(pdd, temp, &p->per_device_data,
per_device_list) {
amd_iommu_unbind_pasid(pdd->dev->pdev, p->pasid);
list_del(&pdd->per_device_list);
kfree(pdd);
}
kfd_pasid_free(p->pasid);
mutex_unlock(&p->mutex);
mutex_destroy(&p->mutex);
kfree(p->queues);
kfree(p);
kfree((void *)work);
}
static void kfd_process_destroy_delayed(struct rcu_head *rcu)
{
struct kfd_process_release_work *work;
struct kfd_process *p;
BUG_ON(!kfd_process_wq);
p = container_of(rcu, struct kfd_process, rcu);
BUG_ON(atomic_read(&p->mm->mm_count) <= 0);
mmdrop(p->mm);
work = (struct kfd_process_release_work *)
kmalloc(sizeof(struct kfd_process_release_work), GFP_ATOMIC);
if (work) {
INIT_WORK((struct work_struct *) work, kfd_process_wq_release);
work->p = p;
queue_work(kfd_process_wq, (struct work_struct *) work);
}
}
static void kfd_process_notifier_release(struct mmu_notifier *mn,
struct mm_struct *mm)
{
struct kfd_process *p;
/*
* The kfd_process structure cannot be freed because the
* mmu_notifier srcu is read locked
*/
p = container_of(mn, struct kfd_process, mmu_notifier);
BUG_ON(p->mm != mm);
mutex_lock(&kfd_processes_mutex);
hash_del_rcu(&p->kfd_processes);
mutex_unlock(&kfd_processes_mutex);
synchronize_srcu(&kfd_processes_srcu);
mutex_lock(&p->mutex);
/* In case our notifier is called before IOMMU notifier */
pqm_uninit(&p->pqm);
mutex_unlock(&p->mutex);
/*
* Because we drop mm_count inside kfd_process_destroy_delayed
* and because the mmu_notifier_unregister function also drops
* mm_count we need to take an extra count here.
*/
atomic_inc(&p->mm->mm_count);
mmu_notifier_unregister_no_release(&p->mmu_notifier, p->mm);
mmu_notifier_call_srcu(&p->rcu, &kfd_process_destroy_delayed);
}
static const struct mmu_notifier_ops kfd_process_mmu_notifier_ops = {
.release = kfd_process_notifier_release,
};
static struct kfd_process *create_process(const struct task_struct *thread)
{
struct kfd_process *process;
int err = -ENOMEM;
process = kzalloc(sizeof(*process), GFP_KERNEL);
if (!process)
goto err_alloc_process;
process->queues = kmalloc_array(INITIAL_QUEUE_ARRAY_SIZE,
sizeof(process->queues[0]), GFP_KERNEL);
if (!process->queues)
goto err_alloc_queues;
process->pasid = kfd_pasid_alloc();
if (process->pasid == 0)
goto err_alloc_pasid;
mutex_init(&process->mutex);
process->mm = thread->mm;
/* register notifier */
process->mmu_notifier.ops = &kfd_process_mmu_notifier_ops;
err = __mmu_notifier_register(&process->mmu_notifier, process->mm);
if (err)
goto err_mmu_notifier;
hash_add_rcu(kfd_processes_table, &process->kfd_processes,
(uintptr_t)process->mm);
process->lead_thread = thread->group_leader;
process->queue_array_size = INITIAL_QUEUE_ARRAY_SIZE;
INIT_LIST_HEAD(&process->per_device_data);
err = pqm_init(&process->pqm, process);
if (err != 0)
goto err_process_pqm_init;
return process;
err_process_pqm_init:
hash_del_rcu(&process->kfd_processes);
synchronize_rcu();
mmu_notifier_unregister_no_release(&process->mmu_notifier, process->mm);
err_mmu_notifier:
kfd_pasid_free(process->pasid);
err_alloc_pasid:
kfree(process->queues);
err_alloc_queues:
kfree(process);
err_alloc_process:
return ERR_PTR(err);
}
struct kfd_process_device *kfd_get_process_device_data(struct kfd_dev *dev,
struct kfd_process *p,
int create_pdd)
{
struct kfd_process_device *pdd = NULL;
list_for_each_entry(pdd, &p->per_device_data, per_device_list)
if (pdd->dev == dev)
return pdd;
if (create_pdd) {
pdd = kzalloc(sizeof(*pdd), GFP_KERNEL);
if (pdd != NULL) {
pdd->dev = dev;
INIT_LIST_HEAD(&pdd->qpd.queues_list);
INIT_LIST_HEAD(&pdd->qpd.priv_queue_list);
pdd->qpd.dqm = dev->dqm;
list_add(&pdd->per_device_list, &p->per_device_data);
}
}
return pdd;
}
/*
* Direct the IOMMU to bind the process (specifically the pasid->mm)
* to the device.
* Unbinding occurs when the process dies or the device is removed.
*
* Assumes that the process lock is held.
*/
struct kfd_process_device *kfd_bind_process_to_device(struct kfd_dev *dev,
struct kfd_process *p)
{
struct kfd_process_device *pdd = kfd_get_process_device_data(dev, p, 1);
int err;
if (pdd == NULL)
return ERR_PTR(-ENOMEM);
if (pdd->bound)
return pdd;
err = amd_iommu_bind_pasid(dev->pdev, p->pasid, p->lead_thread);
if (err < 0)
return ERR_PTR(err);
pdd->bound = true;
return pdd;
}
void kfd_unbind_process_from_device(struct kfd_dev *dev, unsigned int pasid)
{
struct kfd_process *p;
struct kfd_process_device *pdd;
int idx, i;
BUG_ON(dev == NULL);
idx = srcu_read_lock(&kfd_processes_srcu);
hash_for_each_rcu(kfd_processes_table, i, p, kfd_processes)
if (p->pasid == pasid)
break;
srcu_read_unlock(&kfd_processes_srcu, idx);
BUG_ON(p->pasid != pasid);
mutex_lock(&p->mutex);
pqm_uninit(&p->pqm);
pdd = kfd_get_process_device_data(dev, p, 0);
/*
* Just mark pdd as unbound, because we still need it to call
* amd_iommu_unbind_pasid() when the process exits.
* We don't call amd_iommu_unbind_pasid() here
* because the IOMMU called us.
*/
if (pdd)
pdd->bound = false;
mutex_unlock(&p->mutex);
}
struct kfd_process_device *kfd_get_first_process_device_data(struct kfd_process *p)
{
return list_first_entry(&p->per_device_data,
struct kfd_process_device,
per_device_list);
}
struct kfd_process_device *kfd_get_next_process_device_data(struct kfd_process *p,
struct kfd_process_device *pdd)
{
if (list_is_last(&pdd->per_device_list, &p->per_device_data))
return NULL;
return list_next_entry(pdd, per_device_list);
}
bool kfd_has_process_device_data(struct kfd_process *p)
{
return !(list_empty(&p->per_device_data));
}
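/*
 * Illustrative sketch (not part of this patch): walking every per-device
 * data entry of a process with the iterator helpers above.
 */
static void example_walk_process_devices(struct kfd_process *p)
{
	struct kfd_process_device *pdd;

	if (!kfd_has_process_device_data(p))
		return;

	for (pdd = kfd_get_first_process_device_data(p); pdd != NULL;
	     pdd = kfd_get_next_process_device_data(p, pdd))
		pr_debug("kfd: pasid %u uses device (topology id %u)\n",
			 p->pasid, pdd->dev->id);
}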


@ -0,0 +1,343 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/slab.h>
#include <linux/list.h>
#include "kfd_device_queue_manager.h"
#include "kfd_priv.h"
#include "kfd_kernel_queue.h"
static inline struct process_queue_node *get_queue_by_qid(
struct process_queue_manager *pqm, unsigned int qid)
{
struct process_queue_node *pqn;
BUG_ON(!pqm);
list_for_each_entry(pqn, &pqm->queues, process_queue_list) {
if (pqn->q && pqn->q->properties.queue_id == qid)
return pqn;
if (pqn->kq && pqn->kq->queue->properties.queue_id == qid)
return pqn;
}
return NULL;
}
static int find_available_queue_slot(struct process_queue_manager *pqm,
unsigned int *qid)
{
unsigned long found;
BUG_ON(!pqm || !qid);
pr_debug("kfd: in %s\n", __func__);
found = find_first_zero_bit(pqm->queue_slot_bitmap,
max_num_of_queues_per_process);
pr_debug("kfd: the new slot id %lu\n", found);
if (found >= max_num_of_queues_per_process) {
pr_info("amdkfd: Can not open more queues for process with pasid %d\n",
pqm->process->pasid);
return -ENOMEM;
}
set_bit(found, pqm->queue_slot_bitmap);
*qid = found;
return 0;
}
int pqm_init(struct process_queue_manager *pqm, struct kfd_process *p)
{
BUG_ON(!pqm);
INIT_LIST_HEAD(&pqm->queues);
pqm->queue_slot_bitmap =
kzalloc(DIV_ROUND_UP(max_num_of_queues_per_process,
BITS_PER_BYTE), GFP_KERNEL);
if (pqm->queue_slot_bitmap == NULL)
return -ENOMEM;
pqm->process = p;
return 0;
}
void pqm_uninit(struct process_queue_manager *pqm)
{
int retval;
struct process_queue_node *pqn, *next;
BUG_ON(!pqm);
pr_debug("In func %s\n", __func__);
list_for_each_entry_safe(pqn, next, &pqm->queues, process_queue_list) {
retval = pqm_destroy_queue(
pqm,
(pqn->q != NULL) ?
pqn->q->properties.queue_id :
pqn->kq->queue->properties.queue_id);
if (retval != 0) {
pr_err("kfd: failed to destroy queue\n");
return;
}
}
kfree(pqm->queue_slot_bitmap);
pqm->queue_slot_bitmap = NULL;
}
static int create_cp_queue(struct process_queue_manager *pqm,
struct kfd_dev *dev, struct queue **q,
struct queue_properties *q_properties,
struct file *f, unsigned int qid)
{
int retval;
retval = 0;
/* Doorbell initialized in user space */
q_properties->doorbell_ptr = NULL;
q_properties->doorbell_off =
kfd_queue_id_to_doorbell(dev, pqm->process, qid);
/* let DQM handle it */
q_properties->vmid = 0;
q_properties->queue_id = qid;
q_properties->type = KFD_QUEUE_TYPE_COMPUTE;
retval = init_queue(q, *q_properties);
if (retval != 0)
goto err_init_queue;
(*q)->device = dev;
(*q)->process = pqm->process;
pr_debug("kfd: PQM After init queue");
return retval;
err_init_queue:
return retval;
}
int pqm_create_queue(struct process_queue_manager *pqm,
struct kfd_dev *dev,
struct file *f,
struct queue_properties *properties,
unsigned int flags,
enum kfd_queue_type type,
unsigned int *qid)
{
int retval;
struct kfd_process_device *pdd;
struct queue_properties q_properties;
struct queue *q;
struct process_queue_node *pqn;
struct kernel_queue *kq;
BUG_ON(!pqm || !dev || !properties || !qid);
memset(&q_properties, 0, sizeof(struct queue_properties));
memcpy(&q_properties, properties, sizeof(struct queue_properties));
q = NULL;
kq = NULL;
pdd = kfd_get_process_device_data(dev, pqm->process, 1);
BUG_ON(!pdd);
retval = find_available_queue_slot(pqm, qid);
if (retval != 0)
return retval;
if (list_empty(&pqm->queues)) {
pdd->qpd.pqm = pqm;
dev->dqm->register_process(dev->dqm, &pdd->qpd);
}
pqn = kzalloc(sizeof(struct process_queue_node), GFP_KERNEL);
if (!pqn) {
retval = -ENOMEM;
goto err_allocate_pqn;
}
switch (type) {
case KFD_QUEUE_TYPE_COMPUTE:
/* check if there is over subscription */
if ((sched_policy == KFD_SCHED_POLICY_HWS_NO_OVERSUBSCRIPTION) &&
((dev->dqm->processes_count >= VMID_PER_DEVICE) ||
(dev->dqm->queue_count >= PIPE_PER_ME_CP_SCHEDULING * QUEUES_PER_PIPE))) {
pr_err("kfd: over-subscription is not allowed in radeon_kfd.sched_policy == 1\n");
retval = -EPERM;
goto err_create_queue;
}
retval = create_cp_queue(pqm, dev, &q, &q_properties, f, *qid);
if (retval != 0)
goto err_create_queue;
pqn->q = q;
pqn->kq = NULL;
retval = dev->dqm->create_queue(dev->dqm, q, &pdd->qpd,
&q->properties.vmid);
print_queue(q);
break;
case KFD_QUEUE_TYPE_DIQ:
kq = kernel_queue_init(dev, KFD_QUEUE_TYPE_DIQ);
if (kq == NULL) {
retval = -ENOMEM;
goto err_create_queue;
}
kq->queue->properties.queue_id = *qid;
pqn->kq = kq;
pqn->q = NULL;
retval = dev->dqm->create_kernel_queue(dev->dqm, kq, &pdd->qpd);
break;
default:
BUG();
break;
}
if (retval != 0) {
pr_err("kfd: error dqm create queue\n");
goto err_create_queue;
}
pr_debug("kfd: PQM After DQM create queue\n");
list_add(&pqn->process_queue_list, &pqm->queues);
if (q) {
*properties = q->properties;
pr_debug("kfd: PQM done creating queue\n");
print_queue_properties(properties);
}
return retval;
err_create_queue:
kfree(pqn);
err_allocate_pqn:
clear_bit(*qid, pqm->queue_slot_bitmap);
return retval;
}
int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
{
struct process_queue_node *pqn;
struct kfd_process_device *pdd;
struct device_queue_manager *dqm;
struct kfd_dev *dev;
int retval;
dqm = NULL;
BUG_ON(!pqm);
retval = 0;
pr_debug("kfd: In Func %s\n", __func__);
pqn = get_queue_by_qid(pqm, qid);
if (pqn == NULL) {
pr_err("kfd: queue id does not match any known queue\n");
return -EINVAL;
}
dev = NULL;
if (pqn->kq)
dev = pqn->kq->dev;
if (pqn->q)
dev = pqn->q->device;
BUG_ON(!dev);
pdd = kfd_get_process_device_data(dev, pqm->process, 1);
BUG_ON(!pdd);
if (pqn->kq) {
/* destroy kernel queue (DIQ) */
dqm = pqn->kq->dev->dqm;
dqm->destroy_kernel_queue(dqm, pqn->kq, &pdd->qpd);
kernel_queue_uninit(pqn->kq);
}
if (pqn->q) {
dqm = pqn->q->device->dqm;
retval = dqm->destroy_queue(dqm, &pdd->qpd, pqn->q);
if (retval != 0)
return retval;
uninit_queue(pqn->q);
}
list_del(&pqn->process_queue_list);
kfree(pqn);
clear_bit(qid, pqm->queue_slot_bitmap);
if (list_empty(&pqm->queues))
dqm->unregister_process(dqm, &pdd->qpd);
return retval;
}
int pqm_update_queue(struct process_queue_manager *pqm, unsigned int qid,
struct queue_properties *p)
{
int retval;
struct process_queue_node *pqn;
BUG_ON(!pqm);
pqn = get_queue_by_qid(pqm, qid);
BUG_ON(!pqn);
pqn->q->properties.queue_address = p->queue_address;
pqn->q->properties.queue_size = p->queue_size;
pqn->q->properties.queue_percent = p->queue_percent;
pqn->q->properties.priority = p->priority;
retval = pqn->q->device->dqm->update_queue(pqn->q->device->dqm, pqn->q);
if (retval != 0)
return retval;
return 0;
}
static __attribute__((unused)) struct kernel_queue *pqm_get_kernel_queue(
struct process_queue_manager *pqm,
unsigned int qid)
{
struct process_queue_node *pqn;
BUG_ON(!pqm);
pqn = get_queue_by_qid(pqm, qid);
if (pqn && pqn->kq)
return pqn->kq;
return NULL;
}


@ -0,0 +1,85 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
*/
#include <linux/slab.h>
#include "kfd_priv.h"
void print_queue_properties(struct queue_properties *q)
{
if (!q)
return;
pr_debug("Printing queue properties:\n");
pr_debug("Queue Type: %u\n", q->type);
pr_debug("Queue Size: %llu\n", q->queue_size);
pr_debug("Queue percent: %u\n", q->queue_percent);
pr_debug("Queue Address: 0x%llX\n", q->queue_address);
pr_debug("Queue Id: %u\n", q->queue_id);
pr_debug("Queue Process Vmid: %u\n", q->vmid);
pr_debug("Queue Read Pointer: 0x%p\n", q->read_ptr);
pr_debug("Queue Write Pointer: 0x%p\n", q->write_ptr);
pr_debug("Queue Doorbell Pointer: 0x%p\n", q->doorbell_ptr);
pr_debug("Queue Doorbell Offset: %u\n", q->doorbell_off);
}
void print_queue(struct queue *q)
{
if (!q)
return;
pr_debug("Printing queue:\n");
pr_debug("Queue Type: %u\n", q->properties.type);
pr_debug("Queue Size: %llu\n", q->properties.queue_size);
pr_debug("Queue percent: %u\n", q->properties.queue_percent);
pr_debug("Queue Address: 0x%llX\n", q->properties.queue_address);
pr_debug("Queue Id: %u\n", q->properties.queue_id);
pr_debug("Queue Process Vmid: %u\n", q->properties.vmid);
pr_debug("Queue Read Pointer: 0x%p\n", q->properties.read_ptr);
pr_debug("Queue Write Pointer: 0x%p\n", q->properties.write_ptr);
pr_debug("Queue Doorbell Pointer: 0x%p\n", q->properties.doorbell_ptr);
pr_debug("Queue Doorbell Offset: %u\n", q->properties.doorbell_off);
pr_debug("Queue MQD Address: 0x%p\n", q->mqd);
pr_debug("Queue MQD Gart: 0x%llX\n", q->gart_mqd_addr);
pr_debug("Queue Process Address: 0x%p\n", q->process);
pr_debug("Queue Device Address: 0x%p\n", q->device);
}
int init_queue(struct queue **q, struct queue_properties properties)
{
struct queue *tmp;
BUG_ON(!q);
tmp = kzalloc(sizeof(struct queue), GFP_KERNEL);
if (!tmp)
return -ENOMEM;
memcpy(&tmp->properties, &properties, sizeof(struct queue_properties));
*q = tmp;
return 0;
}
void uninit_queue(struct queue *q)
{
kfree(q);
}
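/*
 * Illustrative sketch (not part of this patch): a minimal init/print/uninit
 * round trip with the helpers above.
 */
static int example_queue_round_trip(void)
{
	struct queue_properties props = {
		.type = KFD_QUEUE_TYPE_COMPUTE,
		.queue_percent = 100,
	};
	struct queue *q;
	int err;

	err = init_queue(&q, props);
	if (err != 0)
		return err;

	print_queue(q);
	uninit_queue(q);
	return 0;
}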

Diff not shown because of its large size.


@ -0,0 +1,168 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
#ifndef __KFD_TOPOLOGY_H__
#define __KFD_TOPOLOGY_H__
#include <linux/types.h>
#include <linux/list.h>
#include "kfd_priv.h"
#define KFD_TOPOLOGY_PUBLIC_NAME_SIZE 128
#define HSA_CAP_HOT_PLUGGABLE 0x00000001
#define HSA_CAP_ATS_PRESENT 0x00000002
#define HSA_CAP_SHARED_WITH_GRAPHICS 0x00000004
#define HSA_CAP_QUEUE_SIZE_POW2 0x00000008
#define HSA_CAP_QUEUE_SIZE_32BIT 0x00000010
#define HSA_CAP_QUEUE_IDLE_EVENT 0x00000020
#define HSA_CAP_VA_LIMIT 0x00000040
#define HSA_CAP_WATCH_POINTS_SUPPORTED 0x00000080
#define HSA_CAP_WATCH_POINTS_TOTALBITS_MASK 0x00000f00
#define HSA_CAP_WATCH_POINTS_TOTALBITS_SHIFT 8
#define HSA_CAP_RESERVED 0xfffff000
struct kfd_node_properties {
uint32_t cpu_cores_count;
uint32_t simd_count;
uint32_t mem_banks_count;
uint32_t caches_count;
uint32_t io_links_count;
uint32_t cpu_core_id_base;
uint32_t simd_id_base;
uint32_t capability;
uint32_t max_waves_per_simd;
uint32_t lds_size_in_kb;
uint32_t gds_size_in_kb;
uint32_t wave_front_size;
uint32_t array_count;
uint32_t simd_arrays_per_engine;
uint32_t cu_per_simd_array;
uint32_t simd_per_cu;
uint32_t max_slots_scratch_cu;
uint32_t engine_id;
uint32_t vendor_id;
uint32_t device_id;
uint32_t location_id;
uint32_t max_engine_clk_fcompute;
uint32_t max_engine_clk_ccompute;
uint16_t marketing_name[KFD_TOPOLOGY_PUBLIC_NAME_SIZE];
};
#define HSA_MEM_HEAP_TYPE_SYSTEM 0
#define HSA_MEM_HEAP_TYPE_FB_PUBLIC 1
#define HSA_MEM_HEAP_TYPE_FB_PRIVATE 2
#define HSA_MEM_HEAP_TYPE_GPU_GDS 3
#define HSA_MEM_HEAP_TYPE_GPU_LDS 4
#define HSA_MEM_HEAP_TYPE_GPU_SCRATCH 5
#define HSA_MEM_FLAGS_HOT_PLUGGABLE 0x00000001
#define HSA_MEM_FLAGS_NON_VOLATILE 0x00000002
#define HSA_MEM_FLAGS_RESERVED 0xfffffffc
struct kfd_mem_properties {
struct list_head list;
uint32_t heap_type;
uint64_t size_in_bytes;
uint32_t flags;
uint32_t width;
uint32_t mem_clk_max;
struct kobject *kobj;
struct attribute attr;
};
#define KFD_TOPOLOGY_CPU_SIBLINGS 256
#define HSA_CACHE_TYPE_DATA 0x00000001
#define HSA_CACHE_TYPE_INSTRUCTION 0x00000002
#define HSA_CACHE_TYPE_CPU 0x00000004
#define HSA_CACHE_TYPE_HSACU 0x00000008
#define HSA_CACHE_TYPE_RESERVED 0xfffffff0
struct kfd_cache_properties {
struct list_head list;
uint32_t processor_id_low;
uint32_t cache_level;
uint32_t cache_size;
uint32_t cacheline_size;
uint32_t cachelines_per_tag;
uint32_t cache_assoc;
uint32_t cache_latency;
uint32_t cache_type;
uint8_t sibling_map[KFD_TOPOLOGY_CPU_SIBLINGS];
struct kobject *kobj;
struct attribute attr;
};
struct kfd_iolink_properties {
struct list_head list;
uint32_t iolink_type;
uint32_t ver_maj;
uint32_t ver_min;
uint32_t node_from;
uint32_t node_to;
uint32_t weight;
uint32_t min_latency;
uint32_t max_latency;
uint32_t min_bandwidth;
uint32_t max_bandwidth;
uint32_t rec_transfer_size;
uint32_t flags;
struct kobject *kobj;
struct attribute attr;
};
struct kfd_topology_device {
struct list_head list;
uint32_t gpu_id;
struct kfd_node_properties node_props;
uint32_t mem_bank_count;
struct list_head mem_props;
uint32_t cache_count;
struct list_head cache_props;
uint32_t io_link_count;
struct list_head io_link_props;
struct kfd_dev *gpu;
struct kobject *kobj_node;
struct kobject *kobj_mem;
struct kobject *kobj_cache;
struct kobject *kobj_iolink;
struct attribute attr_gpuid;
struct attribute attr_name;
struct attribute attr_props;
};
struct kfd_system_properties {
uint32_t num_devices; /* Number of H-NUMA nodes */
uint32_t generation_count;
uint64_t platform_oem;
uint64_t platform_id;
uint64_t platform_rev;
struct kobject *kobj_topology;
struct kobject *kobj_nodes;
struct attribute attr_genid;
struct attribute attr_props;
};
#endif /* __KFD_TOPOLOGY_H__ */


@ -0,0 +1,185 @@
/*
* Copyright 2014 Advanced Micro Devices, Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*/
/*
* This file defines the private interface between the
* AMD kernel graphics drivers and the AMD KFD.
*/
#ifndef KGD_KFD_INTERFACE_H_INCLUDED
#define KGD_KFD_INTERFACE_H_INCLUDED
#include <linux/types.h>
struct pci_dev;
#define KFD_INTERFACE_VERSION 1
struct kfd_dev;
struct kgd_dev;
struct kgd_mem;
enum kgd_memory_pool {
KGD_POOL_SYSTEM_CACHEABLE = 1,
KGD_POOL_SYSTEM_WRITECOMBINE = 2,
KGD_POOL_FRAMEBUFFER = 3,
};
struct kgd2kfd_shared_resources {
/* Bit n == 1 means VMID n is available for KFD. */
unsigned int compute_vmid_bitmap;
/* Compute pipes are counted starting from MEC0/pipe0 as 0. */
unsigned int first_compute_pipe;
/* Number of MEC pipes available for KFD. */
unsigned int compute_pipe_count;
/* Base address of doorbell aperture. */
phys_addr_t doorbell_physical_address;
/* Size in bytes of doorbell aperture. */
size_t doorbell_aperture_size;
/* Number of bytes at start of aperture reserved for KGD. */
size_t doorbell_start_offset;
};
/**
* struct kgd2kfd_calls
*
* @exit: Notifies amdkfd that kgd module is unloaded
*
* @probe: Notifies amdkfd about a probe done on a device in the kgd driver.
*
* @device_init: Initialize the newly probed device (if it is a device that
* amdkfd supports)
*
* @device_exit: Notifies amdkfd about a removal of a kgd device
*
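* @interrupt: Notifies amdkfd about an interrupt received on a kgd device,
* passing along the raw ih ring entry
*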
* @suspend: Notifies amdkfd about a suspend action done to a kgd device
*
* @resume: Notifies amdkfd about a resume action done to a kgd device
*
* This structure contains function callback pointers through which the kgd
* driver notifies amdkfd about certain status changes.
*
*/
struct kgd2kfd_calls {
void (*exit)(void);
struct kfd_dev* (*probe)(struct kgd_dev *kgd, struct pci_dev *pdev);
bool (*device_init)(struct kfd_dev *kfd,
const struct kgd2kfd_shared_resources *gpu_resources);
void (*device_exit)(struct kfd_dev *kfd);
void (*interrupt)(struct kfd_dev *kfd, const void *ih_ring_entry);
void (*suspend)(struct kfd_dev *kfd);
int (*resume)(struct kfd_dev *kfd);
};
/**
* struct kfd2kgd_calls
*
* @init_sa_manager: Initialize an instance of the sa manager, used by
* amdkfd for all system memory allocations that are mapped to the GART
* address space
*
* @fini_sa_manager: Releases all memory allocations for amdkfd that are
* handled by kgd sa manager
*
* @allocate_mem: Allocate a buffer from amdkfd's sa manager. The buffer can
* be used for mqds, hpds, kernel queue, fence and runlists
*
* @free_mem: Frees a buffer that was allocated by amdkfd's sa manager
*
* @get_vmem_size: Retrieves (physical) size of VRAM
*
* @get_gpu_clock_counter: Retrieves GPU clock counter
*
* @get_max_engine_clock_in_mhz: Retrieves maximum GPU clock in MHz
*
* @program_sh_mem_settings: A function that should initialize the memory
* properties such as main aperture memory type (cache / non cached) and
* secondary aperture base address, size and memory type.
* This function is used only for no cp scheduling mode.
*
* @set_pasid_vmid_mapping: Exposes a pasid/vmid pair to the H/W. Only used
* in no cp scheduling mode.
*
* @init_memory: Initializes memory apertures to fixed base/limit address
* and non cached memory types.
*
* @init_pipeline: Initializes the compute pipelines.
*
* @hqd_load: Loads the mqd structure to a H/W hqd slot. Used only for no cp
* scheduling mode.
*
* @hqd_is_occupies: Checks if an hqd slot is occupied.
*
* @hqd_destroy: Destructs and preempts the queue assigned to that hqd slot.
*
* This structure contains function pointers to services that the kgd driver
* provides to amdkfd driver.
*
*/
struct kfd2kgd_calls {
/* Memory management. */
int (*init_sa_manager)(struct kgd_dev *kgd, unsigned int size);
void (*fini_sa_manager)(struct kgd_dev *kgd);
int (*allocate_mem)(struct kgd_dev *kgd, size_t size, size_t alignment,
enum kgd_memory_pool pool, struct kgd_mem **mem);
void (*free_mem)(struct kgd_dev *kgd, struct kgd_mem *mem);
uint64_t (*get_vmem_size)(struct kgd_dev *kgd);
uint64_t (*get_gpu_clock_counter)(struct kgd_dev *kgd);
uint32_t (*get_max_engine_clock_in_mhz)(struct kgd_dev *kgd);
/* Register access functions */
void (*program_sh_mem_settings)(struct kgd_dev *kgd, uint32_t vmid,
uint32_t sh_mem_config, uint32_t sh_mem_ape1_base,
uint32_t sh_mem_ape1_limit, uint32_t sh_mem_bases);
int (*set_pasid_vmid_mapping)(struct kgd_dev *kgd, unsigned int pasid,
unsigned int vmid);
int (*init_memory)(struct kgd_dev *kgd);
int (*init_pipeline)(struct kgd_dev *kgd, uint32_t pipe_id,
uint32_t hpd_size, uint64_t hpd_gpu_addr);
int (*hqd_load)(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr);
bool (*hqd_is_occupies)(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id);
int (*hqd_destroy)(struct kgd_dev *kgd, uint32_t reset_type,
unsigned int timeout, uint32_t pipe_id,
uint32_t queue_id);
};
bool kgd2kfd_init(unsigned interface_version,
const struct kfd2kgd_calls *f2g,
const struct kgd2kfd_calls **g2f);
#endif /* KGD_KFD_INTERFACE_H_INCLUDED */
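/*
 * Illustrative sketch (not part of this patch): how a kgd driver is expected
 * to use this interface.  my_kfd2kgd_ops stands in for the ops table the kgd
 * driver would implement, and the shared-resources values are placeholders.
 */
static const struct kgd2kfd_calls *kgd2kfd;
static const struct kfd2kgd_calls my_kfd2kgd_ops;	/* assumed, filled in by the kgd driver */

static bool example_register_with_amdkfd(void)
{
	/* Version handshake: offer the kgd ops, receive the kfd ops back. */
	return kgd2kfd_init(KFD_INTERFACE_VERSION, &my_kfd2kgd_ops, &kgd2kfd);
}

static void example_bring_up_device(struct kgd_dev *kgd, struct pci_dev *pdev)
{
	struct kgd2kfd_shared_resources res = {
		.compute_vmid_bitmap = 0xff00,	/* placeholder: VMIDs 8-15 for KFD */
		.first_compute_pipe = 1,
		.compute_pipe_count = 4,
	};
	struct kfd_dev *kfd = kgd2kfd->probe(kgd, pdev);

	if (kfd && !kgd2kfd->device_init(kfd, &res))
		kgd2kfd->device_exit(kfd);
}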


@ -12,6 +12,7 @@
#include <linux/platform_device.h>
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_plane_helper.h>
#include "armada_crtc.h"
#include "armada_drm.h"
#include "armada_fb.h"


@ -31,6 +31,7 @@
#include <drm/drmP.h>
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_plane_helper.h>
#include "ast_drv.h"
#include "ast_tables.h"


@ -9,6 +9,17 @@
/* ---------------------------------------------------------------------- */
static int bochsfb_mmap(struct fb_info *info,
struct vm_area_struct *vma)
{
struct drm_fb_helper *fb_helper = info->par;
struct bochs_device *bochs =
container_of(fb_helper, struct bochs_device, fb.helper);
struct bochs_bo *bo = gem_to_bochs_bo(bochs->fb.gfb.obj);
return ttm_fbdev_mmap(vma, &bo->bo);
}
static struct fb_ops bochsfb_ops = {
.owner = THIS_MODULE,
.fb_check_var = drm_fb_helper_check_var,
@ -19,6 +30,7 @@ static struct fb_ops bochsfb_ops = {
.fb_pan_display = drm_fb_helper_pan_display,
.fb_blank = drm_fb_helper_blank,
.fb_setcmap = drm_fb_helper_setcmap,
.fb_mmap = bochsfb_mmap,
};
static int bochsfb_create_object(struct bochs_device *bochs,
@ -123,11 +135,9 @@ static int bochsfb_create(struct drm_fb_helper *helper,
info->screen_base = bo->kmap.virtual;
info->screen_size = size;
#if 0
/* FIXME: get this right for mmap(/dev/fb0) */
info->fix.smem_start = bochs_bo_mmap_offset(bo);
drm_vma_offset_remove(&bo->bo.bdev->vma_manager, &bo->bo.vma_node);
info->fix.smem_start = 0;
info->fix.smem_len = size;
#endif
ret = fb_alloc_cmap(&info->cmap, 256, 0);
if (ret) {


@ -51,11 +51,10 @@ int bochs_hw_init(struct drm_device *dev, uint32_t flags)
{
struct bochs_device *bochs = dev->dev_private;
struct pci_dev *pdev = dev->pdev;
unsigned long addr, size, mem, ioaddr, iosize;
unsigned long addr, size, mem, ioaddr, iosize, qext_size;
u16 id;
if (/* (ent->driver_data == BOCHS_QEMU_STDVGA) && */
(pdev->resource[2].flags & IORESOURCE_MEM)) {
if (pdev->resource[2].flags & IORESOURCE_MEM) {
/* mmio bar with vga and bochs registers present */
if (pci_request_region(pdev, 2, "bochs-drm") != 0) {
DRM_ERROR("Cannot request mmio region\n");
@ -116,6 +115,24 @@ int bochs_hw_init(struct drm_device *dev, uint32_t flags)
size / 1024, addr,
bochs->ioports ? "ioports" : "mmio",
ioaddr);
if (bochs->mmio && pdev->revision >= 2) {
qext_size = readl(bochs->mmio + 0x600);
if (qext_size < 4 || qext_size > iosize)
goto noext;
DRM_DEBUG("Found qemu ext regs, size %ld\n", qext_size);
if (qext_size >= 8) {
#ifdef __BIG_ENDIAN
writel(0xbebebebe, bochs->mmio + 0x604);
#else
writel(0x1e1e1e1e, bochs->mmio + 0x604);
#endif
DRM_DEBUG(" qext endian: 0x%x\n",
readl(bochs->mmio + 0x604));
}
}
noext:
return 0;
}


@ -6,6 +6,7 @@
*/
#include "bochs.h"
#include <drm/drm_plane_helper.h>
static int defx = 1024;
static int defy = 768;
@ -108,11 +109,32 @@ static void bochs_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
{
}
static int bochs_crtc_page_flip(struct drm_crtc *crtc,
struct drm_framebuffer *fb,
struct drm_pending_vblank_event *event,
uint32_t page_flip_flags)
{
struct bochs_device *bochs =
container_of(crtc, struct bochs_device, crtc);
struct drm_framebuffer *old_fb = crtc->primary->fb;
unsigned long irqflags;
crtc->primary->fb = fb;
bochs_crtc_mode_set_base(crtc, 0, 0, old_fb);
if (event) {
spin_lock_irqsave(&bochs->dev->event_lock, irqflags);
drm_send_vblank_event(bochs->dev, -1, event);
spin_unlock_irqrestore(&bochs->dev->event_lock, irqflags);
}
return 0;
}
/* These provide the minimum set of functions required to handle a CRTC */
static const struct drm_crtc_funcs bochs_crtc_funcs = {
.gamma_set = bochs_crtc_gamma_set,
.set_config = drm_crtc_helper_set_config,
.destroy = drm_crtc_cleanup,
.page_flip = bochs_crtc_page_flip,
};
static const struct drm_crtc_helper_funcs bochs_helper_funcs = {


@ -210,6 +210,9 @@ int cirrus_framebuffer_init(struct drm_device *dev,
struct drm_mode_fb_cmd2 *mode_cmd,
struct drm_gem_object *obj);
bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height,
int bpp, int pitch);
/* cirrus_display.c */
int cirrus_modeset_init(struct cirrus_device *cdev);
void cirrus_modeset_fini(struct cirrus_device *cdev);


@ -139,6 +139,7 @@ static int cirrusfb_create_object(struct cirrus_fbdev *afbdev,
struct drm_gem_object **gobj_p)
{
struct drm_device *dev = afbdev->helper.dev;
struct cirrus_device *cdev = dev->dev_private;
u32 bpp, depth;
u32 size;
struct drm_gem_object *gobj;
@ -146,8 +147,10 @@ static int cirrusfb_create_object(struct cirrus_fbdev *afbdev,
int ret = 0;
drm_fb_get_bpp_depth(mode_cmd->pixel_format, &depth, &bpp);
if (bpp > 24)
if (!cirrus_check_framebuffer(cdev, mode_cmd->width, mode_cmd->height,
bpp, mode_cmd->pitches[0]))
return -EINVAL;
size = mode_cmd->pitches[0] * mode_cmd->height;
ret = cirrus_gem_create(dev, size, true, &gobj);
if (ret)


@ -49,14 +49,16 @@ cirrus_user_framebuffer_create(struct drm_device *dev,
struct drm_file *filp,
struct drm_mode_fb_cmd2 *mode_cmd)
{
struct cirrus_device *cdev = dev->dev_private;
struct drm_gem_object *obj;
struct cirrus_framebuffer *cirrus_fb;
int ret;
u32 bpp, depth;
drm_fb_get_bpp_depth(mode_cmd->pixel_format, &depth, &bpp);
/* cirrus can't handle > 24bpp framebuffers at all */
if (bpp > 24)
if (!cirrus_check_framebuffer(cdev, mode_cmd->width, mode_cmd->height,
bpp, mode_cmd->pitches[0]))
return ERR_PTR(-EINVAL);
obj = drm_gem_object_lookup(dev, filp, mode_cmd->handles[0]);
@ -96,8 +98,7 @@ static int cirrus_vram_init(struct cirrus_device *cdev)
{
/* BAR 0 is VRAM */
cdev->mc.vram_base = pci_resource_start(cdev->dev->pdev, 0);
/* We have 4MB of VRAM */
cdev->mc.vram_size = 4 * 1024 * 1024;
cdev->mc.vram_size = pci_resource_len(cdev->dev->pdev, 0);
if (!request_mem_region(cdev->mc.vram_base, cdev->mc.vram_size,
"cirrusdrmfb_vram")) {
@ -179,17 +180,22 @@ int cirrus_driver_load(struct drm_device *dev, unsigned long flags)
}
r = cirrus_mm_init(cdev);
if (r)
if (r) {
dev_err(&dev->pdev->dev, "fatal err on mm init\n");
goto out;
}
r = cirrus_modeset_init(cdev);
if (r)
if (r) {
dev_err(&dev->pdev->dev, "Fatal error during modeset init: %d\n", r);
goto out;
}
dev->mode_config.funcs = (void *)&cirrus_mode_funcs;
return 0;
out:
if (r)
cirrus_driver_unload(dev);
cirrus_driver_unload(dev);
return r;
}
@ -307,3 +313,21 @@ out_unlock:
return ret;
}
bool cirrus_check_framebuffer(struct cirrus_device *cdev, int width, int height,
int bpp, int pitch)
{
const int max_pitch = 0x1FF << 3; /* (4096 - 1) & ~111b bytes */
const int max_size = cdev->mc.vram_size;
if (bpp > 32)
return false;
if (pitch > max_pitch)
return false;
if (pitch * height > max_size)
return false;
return true;
}


@ -16,6 +16,7 @@
*/
#include <drm/drmP.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_plane_helper.h>
#include <video/cirrus.h>


@ -0,0 +1,657 @@
/*
* Copyright (C) 2014 Red Hat
* Copyright (C) 2014 Intel Corp.
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
* OTHER DEALINGS IN THE SOFTWARE.
*
* Authors:
* Rob Clark <robdclark@gmail.com>
* Daniel Vetter <daniel.vetter@ffwll.ch>
*/
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_plane_helper.h>
static void kfree_state(struct drm_atomic_state *state)
{
kfree(state->connectors);
kfree(state->connector_states);
kfree(state->crtcs);
kfree(state->crtc_states);
kfree(state->planes);
kfree(state->plane_states);
kfree(state);
}
/**
* drm_atomic_state_alloc - allocate atomic state
* @dev: DRM device
*
* This allocates an empty atomic state to track updates.
*/
struct drm_atomic_state *
drm_atomic_state_alloc(struct drm_device *dev)
{
struct drm_atomic_state *state;
state = kzalloc(sizeof(*state), GFP_KERNEL);
if (!state)
return NULL;
state->num_connector = ACCESS_ONCE(dev->mode_config.num_connector);
state->crtcs = kcalloc(dev->mode_config.num_crtc,
sizeof(*state->crtcs), GFP_KERNEL);
if (!state->crtcs)
goto fail;
state->crtc_states = kcalloc(dev->mode_config.num_crtc,
sizeof(*state->crtc_states), GFP_KERNEL);
if (!state->crtc_states)
goto fail;
state->planes = kcalloc(dev->mode_config.num_total_plane,
sizeof(*state->planes), GFP_KERNEL);
if (!state->planes)
goto fail;
state->plane_states = kcalloc(dev->mode_config.num_total_plane,
sizeof(*state->plane_states), GFP_KERNEL);
if (!state->plane_states)
goto fail;
state->connectors = kcalloc(state->num_connector,
sizeof(*state->connectors),
GFP_KERNEL);
if (!state->connectors)
goto fail;
state->connector_states = kcalloc(state->num_connector,
sizeof(*state->connector_states),
GFP_KERNEL);
if (!state->connector_states)
goto fail;
state->dev = dev;
DRM_DEBUG_KMS("Allocate atomic state %p\n", state);
return state;
fail:
kfree_state(state);
return NULL;
}
EXPORT_SYMBOL(drm_atomic_state_alloc);
/**
* drm_atomic_state_clear - clear state object
* @state: atomic state
*
* When the w/w mutex algorithm detects a deadlock we need to back off and drop
* all locks. So someone else could sneak in and change the current modeset
* configuration. Which means that all the state assembled in @state is no
* longer an atomic update to the current state, but to some arbitrary earlier
* state. Which could break assumptions the driver's ->atomic_check likely
* relies on.
*
* Hence we must clear all cached state and completely start over, using this
* function.
*/
void drm_atomic_state_clear(struct drm_atomic_state *state)
{
struct drm_device *dev = state->dev;
struct drm_mode_config *config = &dev->mode_config;
int i;
DRM_DEBUG_KMS("Clearing atomic state %p\n", state);
for (i = 0; i < state->num_connector; i++) {
struct drm_connector *connector = state->connectors[i];
if (!connector)
continue;
WARN_ON(!drm_modeset_is_locked(&config->connection_mutex));
connector->funcs->atomic_destroy_state(connector,
state->connector_states[i]);
}
for (i = 0; i < config->num_crtc; i++) {
struct drm_crtc *crtc = state->crtcs[i];
if (!crtc)
continue;
crtc->funcs->atomic_destroy_state(crtc,
state->crtc_states[i]);
}
for (i = 0; i < config->num_total_plane; i++) {
struct drm_plane *plane = state->planes[i];
if (!plane)
continue;
plane->funcs->atomic_destroy_state(plane,
state->plane_states[i]);
}
}
EXPORT_SYMBOL(drm_atomic_state_clear);
/**
* drm_atomic_state_free - free all memory for an atomic state
* @state: atomic state to deallocate
*
* This frees all memory associated with an atomic state, including all the
* per-object state for planes, crtcs and connectors.
*/
void drm_atomic_state_free(struct drm_atomic_state *state)
{
drm_atomic_state_clear(state);
DRM_DEBUG_KMS("Freeing atomic state %p\n", state);
kfree_state(state);
}
EXPORT_SYMBOL(drm_atomic_state_free);
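For orientation, here is a rough sketch of how a caller is expected to drive the three functions above together with the w/w backoff; dev, ctx and build_update() are assumptions made up for this example, not part of the patch.

static int example_atomic_update(struct drm_device *dev,
                                 struct drm_modeset_acquire_ctx *ctx)
{
        struct drm_atomic_state *state;
        int ret;

        state = drm_atomic_state_alloc(dev);
        if (!state)
                return -ENOMEM;
        state->acquire_ctx = ctx;

retry:
        /* build_update() stands in for driver code that fills @state via
         * the drm_atomic_get_*_state() functions further down. */
        ret = build_update(state);
        if (ret == -EDEADLK)
                goto backoff;
        if (ret)
                goto fail;

        ret = drm_atomic_commit(state);
        if (ret == -EDEADLK)
                goto backoff;
        if (ret)
                goto fail;

        /* on success drm_atomic_commit() has taken ownership of @state */
        return 0;

backoff:
        /* drop all cached state, release and re-acquire the contended locks */
        drm_atomic_state_clear(state);
        drm_modeset_backoff(ctx);
        goto retry;

fail:
        drm_atomic_state_free(state);
        return ret;
}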
/**
* drm_atomic_get_crtc_state - get crtc state
* @state: global atomic state object
* @crtc: crtc to get state object for
*
* This function returns the crtc state for the given crtc, allocating it if
* needed. It will also grab the relevant crtc lock to make sure that the state
* is consistent.
*
* Returns:
*
* Either the allocated state or the error code encoded into the pointer. When
* the error is EDEADLK then the w/w mutex code has detected a deadlock and the
* entire atomic sequence must be restarted. All other errors are fatal.
*/
struct drm_crtc_state *
drm_atomic_get_crtc_state(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
int ret, index;
struct drm_crtc_state *crtc_state;
index = drm_crtc_index(crtc);
if (state->crtc_states[index])
return state->crtc_states[index];
ret = drm_modeset_lock(&crtc->mutex, state->acquire_ctx);
if (ret)
return ERR_PTR(ret);
crtc_state = crtc->funcs->atomic_duplicate_state(crtc);
if (!crtc_state)
return ERR_PTR(-ENOMEM);
state->crtc_states[index] = crtc_state;
state->crtcs[index] = crtc;
crtc_state->state = state;
DRM_DEBUG_KMS("Added [CRTC:%d] %p state to %p\n",
crtc->base.id, crtc_state, state);
return crtc_state;
}
EXPORT_SYMBOL(drm_atomic_get_crtc_state);
/**
* drm_atomic_get_plane_state - get plane state
* @state: global atomic state object
* @plane: plane to get state object for
*
* This function returns the plane state for the given plane, allocating it if
* needed. It will also grab the relevant plane lock to make sure that the state
* is consistent.
*
* Returns:
*
* Either the allocated state or the error code encoded into the pointer. When
* the error is EDEADLK then the w/w mutex code has detected a deadlock and the
* entire atomic sequence must be restarted. All other errors are fatal.
*/
struct drm_plane_state *
drm_atomic_get_plane_state(struct drm_atomic_state *state,
struct drm_plane *plane)
{
int ret, index;
struct drm_plane_state *plane_state;
index = drm_plane_index(plane);
if (state->plane_states[index])
return state->plane_states[index];
ret = drm_modeset_lock(&plane->mutex, state->acquire_ctx);
if (ret)
return ERR_PTR(ret);
plane_state = plane->funcs->atomic_duplicate_state(plane);
if (!plane_state)
return ERR_PTR(-ENOMEM);
state->plane_states[index] = plane_state;
state->planes[index] = plane;
plane_state->state = state;
DRM_DEBUG_KMS("Added [PLANE:%d] %p state to %p\n",
plane->base.id, plane_state, state);
if (plane_state->crtc) {
struct drm_crtc_state *crtc_state;
crtc_state = drm_atomic_get_crtc_state(state,
plane_state->crtc);
if (IS_ERR(crtc_state))
return ERR_CAST(crtc_state);
}
return plane_state;
}
EXPORT_SYMBOL(drm_atomic_get_plane_state);
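A brief, hypothetical illustration of the two get-state helpers above; crtc and plane are assumed to be handed in by the caller, and -EDEADLK is simply passed up so the caller can restart.

static int example_grow_state(struct drm_atomic_state *state,
                              struct drm_crtc *crtc,
                              struct drm_plane *plane)
{
        struct drm_crtc_state *crtc_state;
        struct drm_plane_state *plane_state;

        /* takes crtc->mutex and duplicates the state on first use */
        crtc_state = drm_atomic_get_crtc_state(state, crtc);
        if (IS_ERR(crtc_state))
                return PTR_ERR(crtc_state);     /* may be -EDEADLK */

        plane_state = drm_atomic_get_plane_state(state, plane);
        if (IS_ERR(plane_state))
                return PTR_ERR(plane_state);

        /* both states now hang off @state and are freed together with it */
        crtc_state->planes_changed = true;
        plane_state->crtc_w = crtc_state->mode.hdisplay;
        plane_state->crtc_h = crtc_state->mode.vdisplay;
        return 0;
}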
/**
* drm_atomic_get_connector_state - get connector state
* @state: global atomic state object
* @connector: connector to get state object for
*
* This function returns the connector state for the given connector,
* allocating it if needed. It will also grab the relevant connector lock to
* make sure that the state is consistent.
*
* Returns:
*
* Either the allocated state or the error code encoded into the pointer. When
* the error is EDEADLK then the w/w mutex code has detected a deadlock and the
* entire atomic sequence must be restarted. All other errors are fatal.
*/
struct drm_connector_state *
drm_atomic_get_connector_state(struct drm_atomic_state *state,
struct drm_connector *connector)
{
int ret, index;
struct drm_mode_config *config = &connector->dev->mode_config;
struct drm_connector_state *connector_state;
ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
if (ret)
return ERR_PTR(ret);
index = drm_connector_index(connector);
/*
* Construction of atomic state updates can race with a connector
* hot-add which might overflow. In this case flip the table and just
* restart the entire ioctl - no one is fast enough to livelock a cpu
* with physical hotplug events anyway.
*
* Note that we only grab the indexes once we have the right lock to
* prevent hotplug/unplugging of connectors. So removal is no problem,
* at most the array is a bit too large.
*/
if (index >= state->num_connector) {
DRM_DEBUG_KMS("Hot-added connector would overflow state array, restarting\n");
return ERR_PTR(-EAGAIN);
}
if (state->connector_states[index])
return state->connector_states[index];
connector_state = connector->funcs->atomic_duplicate_state(connector);
if (!connector_state)
return ERR_PTR(-ENOMEM);
state->connector_states[index] = connector_state;
state->connectors[index] = connector;
connector_state->state = state;
DRM_DEBUG_KMS("Added [CONNECTOR:%d] %p state to %p\n",
connector->base.id, connector_state, state);
if (connector_state->crtc) {
struct drm_crtc_state *crtc_state;
crtc_state = drm_atomic_get_crtc_state(state,
connector_state->crtc);
if (IS_ERR(crtc_state))
return ERR_CAST(crtc_state);
}
return connector_state;
}
EXPORT_SYMBOL(drm_atomic_get_connector_state);
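The -EAGAIN case deserves a short sketch of its own: unlike -EDEADLK it cannot be retried within the same state object, so the whole operation has to start over. The function shape below is made up purely for illustration.

static int example_add_connector(struct drm_atomic_state *state,
                                 struct drm_connector *connector)
{
        struct drm_connector_state *conn_state;

        conn_state = drm_atomic_get_connector_state(state, connector);
        if (IS_ERR(conn_state)) {
                /* -EDEADLK: clear the state, back off and retry.
                 * -EAGAIN: a connector was hot-added after @state sized its
                 *  arrays; free the state and restart from allocation. */
                return PTR_ERR(conn_state);
        }

        return 0;
}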
/**
* drm_atomic_set_crtc_for_plane - set crtc for plane
* @state: the incoming atomic state
* @plane: the plane whose incoming state to update
* @crtc: crtc to use for the plane
*
* Changing the assigned crtc for a plane requires us to grab the lock and state
* for the new crtc, as needed. This function takes care of all these details
* besides updating the pointer in the state object itself.
*
* Returns:
* 0 on success or can fail with -EDEADLK or -ENOMEM. When the error is EDEADLK
* then the w/w mutex code has detected a deadlock and the entire atomic
* sequence must be restarted. All other errors are fatal.
*/
int
drm_atomic_set_crtc_for_plane(struct drm_atomic_state *state,
struct drm_plane *plane, struct drm_crtc *crtc)
{
struct drm_plane_state *plane_state =
drm_atomic_get_plane_state(state, plane);
struct drm_crtc_state *crtc_state;
if (WARN_ON(IS_ERR(plane_state)))
return PTR_ERR(plane_state);
if (plane_state->crtc) {
crtc_state = drm_atomic_get_crtc_state(plane_state->state,
plane_state->crtc);
if (WARN_ON(IS_ERR(crtc_state)))
return PTR_ERR(crtc_state);
crtc_state->plane_mask &= ~(1 << drm_plane_index(plane));
}
plane_state->crtc = crtc;
if (crtc) {
crtc_state = drm_atomic_get_crtc_state(plane_state->state,
crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
crtc_state->plane_mask |= (1 << drm_plane_index(plane));
}
if (crtc)
DRM_DEBUG_KMS("Link plane state %p to [CRTC:%d]\n",
plane_state, crtc->base.id);
else
DRM_DEBUG_KMS("Link plane state %p to [NOCRTC]\n", plane_state);
return 0;
}
EXPORT_SYMBOL(drm_atomic_set_crtc_for_plane);
/**
* drm_atomic_set_fb_for_plane - set framebuffer for plane
* @plane_state: atomic state object for the plane
* @fb: fb to use for the plane
*
* Changing the assigned framebuffer for a plane requires us to grab a reference
* to the new fb and drop the reference to the old fb, if there is one. This
* function takes care of all these details besides updating the pointer in the
* state object itself.
*/
void
drm_atomic_set_fb_for_plane(struct drm_plane_state *plane_state,
struct drm_framebuffer *fb)
{
if (plane_state->fb)
drm_framebuffer_unreference(plane_state->fb);
if (fb)
drm_framebuffer_reference(fb);
plane_state->fb = fb;
if (fb)
DRM_DEBUG_KMS("Set [FB:%d] for plane state %p\n",
fb->base.id, plane_state);
else
DRM_DEBUG_KMS("Set [NOFB] for plane state %p\n", plane_state);
}
EXPORT_SYMBOL(drm_atomic_set_fb_for_plane);
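Taken together, the two setters above cover the usual plane update. A minimal sketch, with crtc and fb supplied by a hypothetical caller:

static int example_update_plane(struct drm_atomic_state *state,
                                struct drm_plane *plane,
                                struct drm_crtc *crtc,
                                struct drm_framebuffer *fb)
{
        struct drm_plane_state *plane_state;
        int ret;

        plane_state = drm_atomic_get_plane_state(state, plane);
        if (IS_ERR(plane_state))
                return PTR_ERR(plane_state);

        /* fixes up plane_mask on both the old and the new crtc state */
        ret = drm_atomic_set_crtc_for_plane(state, plane, crtc);
        if (ret)
                return ret;

        /* grabs a reference on @fb and drops the reference on the old fb */
        drm_atomic_set_fb_for_plane(plane_state, fb);
        return 0;
}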
/**
* drm_atomic_set_crtc_for_connector - set crtc for connector
* @conn_state: atomic state object for the connector
* @crtc: crtc to use for the connector
*
* Changing the assigned crtc for a connector requires us to grab the lock and
* state for the new crtc, as needed. This function takes care of all these
* details besides updating the pointer in the state object itself.
*
* Returns:
* 0 on success or can fail with -EDEADLK or -ENOMEM. When the error is EDEADLK
* then the w/w mutex code has detected a deadlock and the entire atomic
* sequence must be restarted. All other errors are fatal.
*/
int
drm_atomic_set_crtc_for_connector(struct drm_connector_state *conn_state,
struct drm_crtc *crtc)
{
struct drm_crtc_state *crtc_state;
if (crtc) {
crtc_state = drm_atomic_get_crtc_state(conn_state->state, crtc);
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
}
conn_state->crtc = crtc;
if (crtc)
DRM_DEBUG_KMS("Link connector state %p to [CRTC:%d]\n",
conn_state, crtc->base.id);
else
DRM_DEBUG_KMS("Link connector state %p to [NOCRTC]\n",
conn_state);
return 0;
}
EXPORT_SYMBOL(drm_atomic_set_crtc_for_connector);
/**
* drm_atomic_add_affected_connectors - add connectors for crtc
* @state: atomic state
* @crtc: DRM crtc
*
* This function walks the current configuration and adds all connectors
* currently using @crtc to the atomic configuration @state. Note that this
* function must acquire the connection mutex. This can potentially cause
* unneeded serialization if the update is just for the planes on one crtc. Hence
* drivers and helpers should only call this when really needed (e.g. when a
* full modeset needs to happen due to some change).
*
* Returns:
* 0 on success or can fail with -EDEADLK or -ENOMEM. When the error is EDEADLK
* then the w/w mutex code has detected a deadlock and the entire atomic
* sequence must be restarted. All other errors are fatal.
*/
int
drm_atomic_add_affected_connectors(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
struct drm_mode_config *config = &state->dev->mode_config;
struct drm_connector *connector;
struct drm_connector_state *conn_state;
int ret;
ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx);
if (ret)
return ret;
DRM_DEBUG_KMS("Adding all current connectors for [CRTC:%d] to %p\n",
crtc->base.id, state);
/*
* Changed connectors are already in @state, so only need to look at the
* current configuration.
*/
list_for_each_entry(connector, &config->connector_list, head) {
if (connector->state->crtc != crtc)
continue;
conn_state = drm_atomic_get_connector_state(state, connector);
if (IS_ERR(conn_state))
return PTR_ERR(conn_state);
}
return 0;
}
EXPORT_SYMBOL(drm_atomic_add_affected_connectors);
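A sketch of the modeset-style use mentioned in the comment above; connector and crtc come from the (hypothetical) caller.

static int example_set_route(struct drm_atomic_state *state,
                             struct drm_connector *connector,
                             struct drm_crtc *crtc)
{
        struct drm_connector_state *conn_state;
        int ret;

        conn_state = drm_atomic_get_connector_state(state, connector);
        if (IS_ERR(conn_state))
                return PTR_ERR(conn_state);

        ret = drm_atomic_set_crtc_for_connector(conn_state, crtc);
        if (ret)
                return ret;

        /* a full modeset also affects every connector already on @crtc */
        return drm_atomic_add_affected_connectors(state, crtc);
}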
/**
* drm_atomic_connectors_for_crtc - count number of connected outputs
* @state: atomic state
* @crtc: DRM crtc
*
* This function counts all connectors which will be connected to @crtc
* according to @state. Useful to recompute the enable state for @crtc.
*/
int
drm_atomic_connectors_for_crtc(struct drm_atomic_state *state,
struct drm_crtc *crtc)
{
int i, num_connected_connectors = 0;
for (i = 0; i < state->num_connector; i++) {
struct drm_connector_state *conn_state;
conn_state = state->connector_states[i];
if (conn_state && conn_state->crtc == crtc)
num_connected_connectors++;
}
DRM_DEBUG_KMS("State %p has %i connectors for [CRTC:%d]\n",
state, num_connected_connectors, crtc->base.id);
return num_connected_connectors;
}
EXPORT_SYMBOL(drm_atomic_connectors_for_crtc);
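For example, a driver's ->atomic_check() might use the count to keep crtc_state->enable consistent; this is only an illustration of the intended use, not code from the patch.

static void example_update_enable(struct drm_atomic_state *state,
                                  struct drm_crtc *crtc,
                                  struct drm_crtc_state *crtc_state)
{
        bool enable = drm_atomic_connectors_for_crtc(state, crtc) > 0;

        if (crtc_state->enable != enable) {
                /* toggling enable implies a full modeset on this crtc */
                crtc_state->enable = enable;
                crtc_state->mode_changed = true;
        }
}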
/**
* drm_atomic_legacy_backoff - locking backoff for legacy ioctls
* @state: atomic state
*
* This function should be used by legacy entry points which don't understand
* -EDEADLK semantics. For simplicity this one will grab all modeset locks after
* the slowpath completed.
*/
void drm_atomic_legacy_backoff(struct drm_atomic_state *state)
{
int ret;
retry:
drm_modeset_backoff(state->acquire_ctx);
ret = drm_modeset_lock(&state->dev->mode_config.connection_mutex,
state->acquire_ctx);
if (ret)
goto retry;
ret = drm_modeset_lock_all_crtcs(state->dev,
state->acquire_ctx);
if (ret)
goto retry;
}
EXPORT_SYMBOL(drm_atomic_legacy_backoff);
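A legacy entry point would typically wrap its update in a loop like the following sketch, where do_legacy_update() is a placeholder for code built on the functions above.

static int example_legacy_entry(struct drm_atomic_state *state)
{
        int ret;

retry:
        ret = do_legacy_update(state);  /* hypothetical helper */
        if (ret == -EDEADLK) {
                /* can't return -EDEADLK to legacy userspace, so retry here */
                drm_atomic_state_clear(state);
                drm_atomic_legacy_backoff(state);
                goto retry;
        }
        return ret;
}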
/**
* drm_atomic_check_only - check whether a given config would work
* @state: atomic configuration to check
*
* Note that this function can return -EDEADLK if the driver needed to acquire
* more locks but encountered a deadlock. The caller must then do the usual w/w
* backoff dance and restart. All other errors are fatal.
*
* Returns:
* 0 on success, negative error code on failure.
*/
int drm_atomic_check_only(struct drm_atomic_state *state)
{
struct drm_mode_config *config = &state->dev->mode_config;
DRM_DEBUG_KMS("checking %p\n", state);
if (config->funcs->atomic_check)
return config->funcs->atomic_check(state->dev, state);
else
return 0;
}
EXPORT_SYMBOL(drm_atomic_check_only);
/**
* drm_atomic_commit - commit configuration atomically
* @state: atomic configuration to commit
*
* Note that this function can return -EDEADLK if the driver needed to acquire
* more locks but encountered a deadlock. The caller must then do the usual w/w
* backoff dance and restart. All other errors are fatal.
*
* Also note that on successful execution ownership of @state is transferred
* from the caller of this function to the function itself. The caller must not
* free or in any other way access @state. If the function fails then the caller
* must clean up @state itself.
*
* Returns:
* 0 on success, negative error code on failure.
*/
int drm_atomic_commit(struct drm_atomic_state *state)
{
struct drm_mode_config *config = &state->dev->mode_config;
int ret;
ret = drm_atomic_check_only(state);
if (ret)
return ret;
DRM_DEBUG_KMS("commiting %p\n", state);
return config->funcs->atomic_commit(state->dev, state, false);
}
EXPORT_SYMBOL(drm_atomic_commit);
/**
* drm_atomic_async_commit - atomic&async configuration commit
* @state: atomic configuration to commit
*
* Note that this function can return -EDEADLK if the driver needed to acquire
* more locks but encountered a deadlock. The caller must then do the usual w/w
* backoff dance and restart. All other errors are fatal.
*
* Also note that on successful execution ownership of @state is transferred
* from the caller of this function to the function itself. The caller must not
* free or in any other way access @state. If the function fails then the caller
* must clean up @state itself.
*
* Returns:
* 0 on success, negative error code on failure.
*/
int drm_atomic_async_commit(struct drm_atomic_state *state)
{
struct drm_mode_config *config = &state->dev->mode_config;
int ret;
ret = drm_atomic_check_only(state);
if (ret)
return ret;
DRM_DEBUG_KMS("commiting %p asynchronously\n", state);
return config->funcs->atomic_commit(state->dev, state, true);
}
EXPORT_SYMBOL(drm_atomic_async_commit);
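Both commit variants end up in the driver's mode_config callbacks, so an atomic driver is expected to provide roughly this wiring (the example_* functions are placeholders):

static const struct drm_mode_config_funcs example_mode_config_funcs = {
        .fb_create      = example_fb_create,
        .atomic_check   = example_atomic_check,
        .atomic_commit  = example_atomic_commit,
};

Drivers converting with the new atomic helper library would typically point these entries at drm_atomic_helper_check() and drm_atomic_helper_commit() instead of hand-rolled functions.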

(The diff for this file is not shown because of its large size.)

(The diff for this file is not shown because of its large size.)


@ -34,12 +34,35 @@
#include <linux/moduleparam.h>
#include <drm/drmP.h>
#include <drm/drm_atomic.h>
#include <drm/drm_crtc.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_edid.h>
/**
* DOC: overview
*
* The CRTC modeset helper library provides a default set_config implementation
* in drm_crtc_helper_set_config(). Plus a few other convenience functions using
* the same callbacks which drivers can use to e.g. restore the modeset
* configuration on resume with drm_helper_resume_force_mode().
*
* The driver callbacks are mostly compatible with the atomic modeset helpers,
* except for the handling of the primary plane: Atomic helpers require that the
* primary plane is implemented as a real standalone plane and not directly tied
* to the CRTC state. For easier transition this library provides functions to
* implement the old semantics required by the CRTC helpers using the new plane
* and atomic helper callbacks.
*
* Drivers are strongly urged to convert to the atomic helpers (by way of first
* converting to the plane helpers). New drivers must not use these functions
* but need to implement the atomic interface instead, potentially using the
* atomic helpers for that.
*/
MODULE_AUTHOR("David Airlie, Jesse Barnes");
MODULE_DESCRIPTION("DRM KMS helper");
MODULE_LICENSE("GPL and additional rights");
@ -888,3 +911,112 @@ void drm_helper_resume_force_mode(struct drm_device *dev)
drm_modeset_unlock_all(dev);
}
EXPORT_SYMBOL(drm_helper_resume_force_mode);
/**
* drm_helper_crtc_mode_set - mode_set implementation for atomic plane helpers
* @crtc: DRM CRTC
* @mode: DRM display mode which userspace requested
* @adjusted_mode: DRM display mode adjusted by ->mode_fixup callbacks
* @x: x offset of the CRTC scanout area on the underlying framebuffer
* @y: y offset of the CRTC scanout area on the underlying framebuffer
* @old_fb: previous framebuffer
*
* This function implements a callback useable as the ->mode_set callback
* required by the crtc helpers. Besides the atomic plane helper functions for
* the primary plane the driver must also provide the ->mode_set_nofb callback
* to set up the crtc.
*
* This is a transitional helper useful for converting drivers to the atomic
* interfaces.
*/
int drm_helper_crtc_mode_set(struct drm_crtc *crtc, struct drm_display_mode *mode,
struct drm_display_mode *adjusted_mode, int x, int y,
struct drm_framebuffer *old_fb)
{
struct drm_crtc_state *crtc_state;
struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private;
int ret;
if (crtc->funcs->atomic_duplicate_state)
crtc_state = crtc->funcs->atomic_duplicate_state(crtc);
else if (crtc->state)
crtc_state = kmemdup(crtc->state, sizeof(*crtc_state),
GFP_KERNEL);
else
crtc_state = kzalloc(sizeof(*crtc_state), GFP_KERNEL);
if (!crtc_state)
return -ENOMEM;
crtc_state->enable = true;
crtc_state->planes_changed = true;
crtc_state->mode_changed = true;
drm_mode_copy(&crtc_state->mode, mode);
drm_mode_copy(&crtc_state->adjusted_mode, adjusted_mode);
if (crtc_funcs->atomic_check) {
ret = crtc_funcs->atomic_check(crtc, crtc_state);
if (ret) {
kfree(crtc_state);
return ret;
}
}
swap(crtc->state, crtc_state);
crtc_funcs->mode_set_nofb(crtc);
if (crtc_state) {
if (crtc->funcs->atomic_destroy_state)
crtc->funcs->atomic_destroy_state(crtc, crtc_state);
else
kfree(crtc_state);
}
return drm_helper_crtc_mode_set_base(crtc, x, y, old_fb);
}
EXPORT_SYMBOL(drm_helper_crtc_mode_set);
/**
* drm_helper_crtc_mode_set_base - mode_set_base implementation for atomic plane helpers
* @crtc: DRM CRTC
* @x: x offset of the CRTC scanout area on the underlying framebuffer
* @y: y offset of the CRTC scanout area on the underlying framebuffer
* @old_fb: previous framebuffer
*
* This function implements a callback usable as the ->mode_set_base callback
* required by the crtc helpers. The driver must provide the atomic plane helper
* functions for the primary plane.
*
* This is a transitional helper useful for converting drivers to the atomic
* interfaces.
*/
int drm_helper_crtc_mode_set_base(struct drm_crtc *crtc, int x, int y,
struct drm_framebuffer *old_fb)
{
struct drm_plane_state *plane_state;
struct drm_plane *plane = crtc->primary;
if (plane->funcs->atomic_duplicate_state)
plane_state = plane->funcs->atomic_duplicate_state(plane);
else if (plane->state)
plane_state = drm_atomic_helper_plane_duplicate_state(plane);
else
plane_state = kzalloc(sizeof(*plane_state), GFP_KERNEL);
if (!plane_state)
return -ENOMEM;
plane_state->crtc = crtc;
drm_atomic_set_fb_for_plane(plane_state, crtc->primary->fb);
plane_state->crtc_x = 0;
plane_state->crtc_y = 0;
plane_state->crtc_h = crtc->mode.vdisplay;
plane_state->crtc_w = crtc->mode.hdisplay;
plane_state->src_x = x << 16;
plane_state->src_y = y << 16;
plane_state->src_h = crtc->mode.vdisplay << 16;
plane_state->src_w = crtc->mode.hdisplay << 16;
return drm_plane_helper_commit(plane, plane_state, old_fb);
}
EXPORT_SYMBOL(drm_helper_crtc_mode_set_base);
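To make the transitional use concrete, a converted driver would keep a crtc helper table roughly like the sketch below, with the two helpers above standing in for the legacy callbacks while ->mode_set_nofb and the atomic plane helpers carry the real work. All example_* entries are placeholders, not functions from this patch.

static const struct drm_crtc_helper_funcs example_crtc_helper_funcs = {
        .dpms           = example_crtc_dpms,
        .mode_fixup     = example_crtc_mode_fixup,
        .mode_set       = drm_helper_crtc_mode_set,
        .mode_set_nofb  = example_crtc_mode_set_nofb,
        .mode_set_base  = drm_helper_crtc_mode_set_base,
        .prepare        = example_crtc_prepare,
        .commit         = example_crtc_commit,
        .atomic_check   = example_crtc_atomic_check,
};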


@ -39,198 +39,6 @@
* blocks, ...
*/
/* Run a single AUX_CH I2C transaction, writing/reading data as necessary */
static int
i2c_algo_dp_aux_transaction(struct i2c_adapter *adapter, int mode,
uint8_t write_byte, uint8_t *read_byte)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
ret = (*algo_data->aux_ch)(adapter, mode,
write_byte, read_byte);
return ret;
}
/*
* I2C over AUX CH
*/
/*
* Send the address. If the I2C link is running, this 'restarts'
* the connection with the new address, this is used for doing
* a write followed by a read (as needed for DDC)
*/
static int
i2c_algo_dp_aux_address(struct i2c_adapter *adapter, u16 address, bool reading)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int mode = MODE_I2C_START;
int ret;
if (reading)
mode |= MODE_I2C_READ;
else
mode |= MODE_I2C_WRITE;
algo_data->address = address;
algo_data->running = true;
ret = i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL);
return ret;
}
/*
* Stop the I2C transaction. This closes out the link, sending
* a bare address packet with the MOT bit turned off
*/
static void
i2c_algo_dp_aux_stop(struct i2c_adapter *adapter, bool reading)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int mode = MODE_I2C_STOP;
if (reading)
mode |= MODE_I2C_READ;
else
mode |= MODE_I2C_WRITE;
if (algo_data->running) {
(void) i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL);
algo_data->running = false;
}
}
/*
* Write a single byte to the current I2C address, the
* the I2C link must be running or this returns -EIO
*/
static int
i2c_algo_dp_aux_put_byte(struct i2c_adapter *adapter, u8 byte)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
if (!algo_data->running)
return -EIO;
ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_WRITE, byte, NULL);
return ret;
}
/*
* Read a single byte from the current I2C address, the
* I2C link must be running or this returns -EIO
*/
static int
i2c_algo_dp_aux_get_byte(struct i2c_adapter *adapter, u8 *byte_ret)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
if (!algo_data->running)
return -EIO;
ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_READ, 0, byte_ret);
return ret;
}
static int
i2c_algo_dp_aux_xfer(struct i2c_adapter *adapter,
struct i2c_msg *msgs,
int num)
{
int ret = 0;
bool reading = false;
int m;
int b;
for (m = 0; m < num; m++) {
u16 len = msgs[m].len;
u8 *buf = msgs[m].buf;
reading = (msgs[m].flags & I2C_M_RD) != 0;
ret = i2c_algo_dp_aux_address(adapter, msgs[m].addr, reading);
if (ret < 0)
break;
if (reading) {
for (b = 0; b < len; b++) {
ret = i2c_algo_dp_aux_get_byte(adapter, &buf[b]);
if (ret < 0)
break;
}
} else {
for (b = 0; b < len; b++) {
ret = i2c_algo_dp_aux_put_byte(adapter, buf[b]);
if (ret < 0)
break;
}
}
if (ret < 0)
break;
}
if (ret >= 0)
ret = num;
i2c_algo_dp_aux_stop(adapter, reading);
DRM_DEBUG_KMS("dp_aux_xfer return %d\n", ret);
return ret;
}
static u32
i2c_algo_dp_aux_functionality(struct i2c_adapter *adapter)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |
I2C_FUNC_SMBUS_READ_BLOCK_DATA |
I2C_FUNC_SMBUS_BLOCK_PROC_CALL |
I2C_FUNC_10BIT_ADDR;
}
static const struct i2c_algorithm i2c_dp_aux_algo = {
.master_xfer = i2c_algo_dp_aux_xfer,
.functionality = i2c_algo_dp_aux_functionality,
};
static void
i2c_dp_aux_reset_bus(struct i2c_adapter *adapter)
{
(void) i2c_algo_dp_aux_address(adapter, 0, false);
(void) i2c_algo_dp_aux_stop(adapter, false);
}
static int
i2c_dp_aux_prepare_bus(struct i2c_adapter *adapter)
{
adapter->algo = &i2c_dp_aux_algo;
adapter->retries = 3;
i2c_dp_aux_reset_bus(adapter);
return 0;
}
/**
* i2c_dp_aux_add_bus() - register an i2c adapter using the aux ch helper
* @adapter: i2c adapter to register
*
* This registers an i2c adapter that uses dp aux channel as it's underlaying
* transport. The driver needs to fill out the &i2c_algo_dp_aux_data structure
* and store it in the algo_data member of the @adapter argument. This will be
* used by the i2c over dp aux algorithm to drive the hardware.
*
* RETURNS:
* 0 on success, -ERRNO on failure.
*
* IMPORTANT:
* This interface is deprecated, please switch to the new dp aux helpers and
* drm_dp_aux_register().
*/
int
i2c_dp_aux_add_bus(struct i2c_adapter *adapter)
{
int error;
error = i2c_dp_aux_prepare_bus(adapter);
if (error)
return error;
error = i2c_add_adapter(adapter);
return error;
}
EXPORT_SYMBOL(i2c_dp_aux_add_bus);
/* Helpers for DP link training */
static u8 dp_link_status(const u8 link_status[DP_LINK_STATUS_SIZE], int r)
{
@ -378,10 +186,11 @@ static int drm_dp_dpcd_access(struct drm_dp_aux *aux, u8 request,
/*
* The specification doesn't give any recommendation on how often to
* retry native transactions, so retry 7 times like for I2C-over-AUX
* transactions.
* retry native transactions. We used to retry 7 times like for
* aux i2c transactions, but on real world devices this wasn't
* sufficient, so bump to 32, which makes Dell 4k monitors happier.
*/
for (retry = 0; retry < 7; retry++) {
for (retry = 0; retry < 32; retry++) {
mutex_lock(&aux->hw_mutex);
err = aux->transfer(aux, &msg);
@ -654,10 +463,12 @@ static int drm_dp_i2c_do_msg(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg)
case DP_AUX_I2C_REPLY_NACK:
DRM_DEBUG_KMS("I2C nack\n");
aux->i2c_nack_count++;
return -EREMOTEIO;
case DP_AUX_I2C_REPLY_DEFER:
DRM_DEBUG_KMS("I2C defer\n");
aux->i2c_defer_count++;
usleep_range(400, 500);
continue;


@ -839,6 +839,8 @@ static void drm_dp_put_mst_branch_device(struct drm_dp_mst_branch *mstb)
static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int old_pdt)
{
struct drm_dp_mst_branch *mstb;
switch (old_pdt) {
case DP_PEER_DEVICE_DP_LEGACY_CONV:
case DP_PEER_DEVICE_SST_SINK:
@ -846,8 +848,9 @@ static void drm_dp_port_teardown_pdt(struct drm_dp_mst_port *port, int old_pdt)
drm_dp_mst_unregister_i2c_bus(&port->aux);
break;
case DP_PEER_DEVICE_MST_BRANCHING:
drm_dp_put_mst_branch_device(port->mstb);
mstb = port->mstb;
port->mstb = NULL;
drm_dp_put_mst_branch_device(mstb);
break;
}
}
@ -858,6 +861,8 @@ static void drm_dp_destroy_port(struct kref *kref)
struct drm_dp_mst_topology_mgr *mgr = port->mgr;
if (!port->input) {
port->vcpi.num_slots = 0;
kfree(port->cached_edid);
if (port->connector)
(*port->mgr->cbs->destroy_connector)(mgr, port->connector);
drm_dp_port_teardown_pdt(port, port->pdt);
@ -1011,19 +1016,20 @@ static void drm_dp_check_port_guid(struct drm_dp_mst_branch *mstb,
static void build_mst_prop_path(struct drm_dp_mst_port *port,
struct drm_dp_mst_branch *mstb,
char *proppath)
char *proppath,
size_t proppath_size)
{
int i;
char temp[8];
snprintf(proppath, 255, "mst:%d", mstb->mgr->conn_base_id);
snprintf(proppath, proppath_size, "mst:%d", mstb->mgr->conn_base_id);
for (i = 0; i < (mstb->lct - 1); i++) {
int shift = (i % 2) ? 0 : 4;
int port_num = mstb->rad[i / 2] >> shift;
snprintf(temp, 8, "-%d", port_num);
strncat(proppath, temp, 255);
snprintf(temp, sizeof(temp), "-%d", port_num);
strlcat(proppath, temp, proppath_size);
}
snprintf(temp, 8, "-%d", port->port_num);
strncat(proppath, temp, 255);
snprintf(temp, sizeof(temp), "-%d", port->port_num);
strlcat(proppath, temp, proppath_size);
}
static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
@ -1094,8 +1100,12 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb,
if (created && !port->input) {
char proppath[255];
build_mst_prop_path(port, mstb, proppath);
build_mst_prop_path(port, mstb, proppath, sizeof(proppath));
port->connector = (*mstb->mgr->cbs->add_connector)(mstb->mgr, port, proppath);
if (port->port_num >= 8) {
port->cached_edid = drm_get_edid(port->connector, &port->aux.ddc);
}
}
/* put reference to this port */
@ -1798,17 +1808,27 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
return 0;
}
static int drm_dp_get_vc_payload_bw(int dp_link_bw, int dp_link_count)
static bool drm_dp_get_vc_payload_bw(int dp_link_bw,
int dp_link_count,
int *out)
{
switch (dp_link_bw) {
default:
DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n",
dp_link_bw, dp_link_count);
return false;
case DP_LINK_BW_1_62:
return 3 * dp_link_count;
*out = 3 * dp_link_count;
break;
case DP_LINK_BW_2_7:
return 5 * dp_link_count;
*out = 5 * dp_link_count;
break;
case DP_LINK_BW_5_4:
return 10 * dp_link_count;
*out = 10 * dp_link_count;
break;
}
BUG();
return true;
}
/**
@ -1840,7 +1860,13 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
goto out_unlock;
}
mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1], mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
if (!drm_dp_get_vc_payload_bw(mgr->dpcd[1],
mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK,
&mgr->pbn_div)) {
ret = -EINVAL;
goto out_unlock;
}
mgr->total_pbn = 2560;
mgr->total_slots = DIV_ROUND_UP(mgr->total_pbn, mgr->pbn_div);
mgr->avail_slots = mgr->total_slots;
@ -2150,7 +2176,8 @@ EXPORT_SYMBOL(drm_dp_mst_hpd_irq);
* This returns the current connection state for a port. It validates the
* port pointer still exists so the caller doesn't require a reference
*/
enum drm_connector_status drm_dp_mst_detect_port(struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
enum drm_connector_status drm_dp_mst_detect_port(struct drm_connector *connector,
struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port)
{
enum drm_connector_status status = connector_status_disconnected;
@ -2169,6 +2196,10 @@ enum drm_connector_status drm_dp_mst_detect_port(struct drm_dp_mst_topology_mgr
case DP_PEER_DEVICE_SST_SINK:
status = connector_status_connected;
/* for logical ports - cache the EDID */
if (port->port_num >= 8 && !port->cached_edid) {
port->cached_edid = drm_get_edid(connector, &port->aux.ddc);
}
break;
case DP_PEER_DEVICE_DP_LEGACY_CONV:
if (port->ldps)
@ -2200,7 +2231,12 @@ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_
if (!port)
return NULL;
edid = drm_get_edid(connector, &port->aux.ddc);
if (port->cached_edid)
edid = drm_edid_duplicate(port->cached_edid);
else
edid = drm_get_edid(connector, &port->aux.ddc);
drm_mode_connector_set_tile_property(connector);
drm_dp_put_port(port);
return edid;
}


@ -56,7 +56,7 @@ static struct idr drm_minors_idr;
struct class *drm_class;
static struct dentry *drm_debugfs_root;
void drm_err(const char *func, const char *format, ...)
void drm_err(const char *format, ...)
{
struct va_format vaf;
va_list args;
@ -66,7 +66,8 @@ void drm_err(const char *func, const char *format, ...)
vaf.fmt = format;
vaf.va = &args;
printk(KERN_ERR "[" DRM_NAME ":%s] *ERROR* %pV", func, &vaf);
printk(KERN_ERR "[" DRM_NAME ":%pf] *ERROR* %pV",
__builtin_return_address(0), &vaf);
va_end(args);
}
@ -534,6 +535,8 @@ static void drm_fs_inode_free(struct inode *inode)
* The initial ref-count of the object is 1. Use drm_dev_ref() and
* drm_dev_unref() to take and drop further ref-counts.
*
* Note that for purely virtual devices @parent can be NULL.
*
* RETURNS:
* Pointer to new DRM device, or NULL if out of memory.
*/


@ -34,6 +34,7 @@
#include <linux/module.h>
#include <drm/drmP.h>
#include <drm/drm_edid.h>
#include <drm/drm_displayid.h>
#define version_greater(edid, maj, min) \
(((edid)->version > (maj)) || \
@ -1014,6 +1015,27 @@ module_param_named(edid_fixup, edid_fixup, int, 0400);
MODULE_PARM_DESC(edid_fixup,
"Minimum number of valid EDID header bytes (0-8, default 6)");
static void drm_get_displayid(struct drm_connector *connector,
struct edid *edid);
static int drm_edid_block_checksum(const u8 *raw_edid)
{
int i;
u8 csum = 0;
for (i = 0; i < EDID_LENGTH; i++)
csum += raw_edid[i];
return csum;
}
static bool drm_edid_is_zero(const u8 *in_edid, int length)
{
if (memchr_inv(in_edid, 0, length))
return false;
return true;
}
/**
* drm_edid_block_valid - Sanity check the EDID block (base or extension)
* @raw_edid: pointer to raw EDID block
@ -1027,8 +1049,7 @@ MODULE_PARM_DESC(edid_fixup,
*/
bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
{
int i;
u8 csum = 0;
u8 csum;
struct edid *edid = (struct edid *)raw_edid;
if (WARN_ON(!raw_edid))
@ -1048,8 +1069,7 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
}
}
for (i = 0; i < EDID_LENGTH; i++)
csum += raw_edid[i];
csum = drm_edid_block_checksum(raw_edid);
if (csum) {
if (print_bad_edid) {
DRM_ERROR("EDID checksum is invalid, remainder is %d\n", csum);
@ -1080,9 +1100,13 @@ bool drm_edid_block_valid(u8 *raw_edid, int block, bool print_bad_edid)
bad:
if (print_bad_edid) {
printk(KERN_ERR "Raw EDID:\n");
print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1,
if (drm_edid_is_zero(raw_edid, EDID_LENGTH)) {
printk(KERN_ERR "EDID block is all zeroes\n");
} else {
printk(KERN_ERR "Raw EDID:\n");
print_hex_dump(KERN_ERR, " \t", DUMP_PREFIX_NONE, 16, 1,
raw_edid, EDID_LENGTH, false);
}
}
return false;
}
@ -1115,7 +1139,7 @@ EXPORT_SYMBOL(drm_edid_is_valid);
#define DDC_SEGMENT_ADDR 0x30
/**
* drm_do_probe_ddc_edid() - get EDID information via I2C
* @adapter: I2C device adaptor
* @data: I2C device adapter
* @buf: EDID data buffer to be filled
* @block: 128 byte EDID block to start fetching from
* @len: EDID data buffer length to fetch
@ -1125,9 +1149,9 @@ EXPORT_SYMBOL(drm_edid_is_valid);
* Return: 0 on success or -1 on failure.
*/
static int
drm_do_probe_ddc_edid(struct i2c_adapter *adapter, unsigned char *buf,
int block, int len)
drm_do_probe_ddc_edid(void *data, u8 *buf, unsigned int block, size_t len)
{
struct i2c_adapter *adapter = data;
unsigned char start = block * EDID_LENGTH;
unsigned char segment = block >> 1;
unsigned char xfers = segment ? 3 : 2;
@ -1176,16 +1200,26 @@ drm_do_probe_ddc_edid(struct i2c_adapter *adapter, unsigned char *buf,
return ret == xfers ? 0 : -1;
}
static bool drm_edid_is_zero(u8 *in_edid, int length)
{
if (memchr_inv(in_edid, 0, length))
return false;
return true;
}
static u8 *
drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
/**
* drm_do_get_edid - get EDID data using a custom EDID block read function
* @connector: connector we're probing
* @get_edid_block: EDID block read function
* @data: private data passed to the block read function
*
* When the I2C adapter connected to the DDC bus is hidden behind a device that
* exposes a different interface to read EDID blocks this function can be used
* to get EDID data using a custom block read function.
*
* As in the general case the DDC bus is accessible by the kernel at the I2C
* level, drivers must make all reasonable efforts to expose it as an I2C
* adapter and use drm_get_edid() instead of abusing this function.
*
* Return: Pointer to valid EDID or NULL if we couldn't find any.
*/
struct edid *drm_do_get_edid(struct drm_connector *connector,
int (*get_edid_block)(void *data, u8 *buf, unsigned int block,
size_t len),
void *data)
{
int i, j = 0, valid_extensions = 0;
u8 *block, *new;
@ -1196,7 +1230,7 @@ drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
/* base block fetch */
for (i = 0; i < 4; i++) {
if (drm_do_probe_ddc_edid(adapter, block, 0, EDID_LENGTH))
if (get_edid_block(data, block, 0, EDID_LENGTH))
goto out;
if (drm_edid_block_valid(block, 0, print_bad_edid))
break;
@ -1210,7 +1244,7 @@ drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
/* if there's no extensions, we're done */
if (block[0x7e] == 0)
return block;
return (struct edid *)block;
new = krealloc(block, (block[0x7e] + 1) * EDID_LENGTH, GFP_KERNEL);
if (!new)
@ -1219,7 +1253,7 @@ drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
for (j = 1; j <= block[0x7e]; j++) {
for (i = 0; i < 4; i++) {
if (drm_do_probe_ddc_edid(adapter,
if (get_edid_block(data,
block + (valid_extensions + 1) * EDID_LENGTH,
j, EDID_LENGTH))
goto out;
@ -1247,7 +1281,7 @@ drm_do_get_edid(struct drm_connector *connector, struct i2c_adapter *adapter)
block = new;
}
return block;
return (struct edid *)block;
carp:
if (print_bad_edid) {
@ -1260,6 +1294,7 @@ out:
kfree(block);
return NULL;
}
EXPORT_SYMBOL_GPL(drm_do_get_edid);
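As a made-up example of the custom block-read hook, consider a bridge chip that only exposes EDID through its own register window. Every example_* name below is an assumption for illustration; the signature of the callback is the one documented above.

static int example_read_edid_block(void *data, u8 *buf,
                                   unsigned int block, size_t len)
{
        struct example_bridge *bridge = data;   /* hypothetical driver struct */

        /* stand-in for whatever register/bus access the device needs;
         * returns 0 on success, a negative error code on failure */
        return example_bridge_read(bridge, block * EDID_LENGTH, buf, len);
}

static struct edid *example_get_edid(struct example_bridge *bridge,
                                     struct drm_connector *connector)
{
        return drm_do_get_edid(connector, example_read_edid_block, bridge);
}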
/**
* drm_probe_ddc() - probe DDC presence
@ -1289,11 +1324,14 @@ EXPORT_SYMBOL(drm_probe_ddc);
struct edid *drm_get_edid(struct drm_connector *connector,
struct i2c_adapter *adapter)
{
struct edid *edid = NULL;
struct edid *edid;
if (drm_probe_ddc(adapter))
edid = (struct edid *)drm_do_get_edid(connector, adapter);
if (!drm_probe_ddc(adapter))
return NULL;
edid = drm_do_get_edid(connector, drm_do_probe_ddc_edid, adapter);
if (edid)
drm_get_displayid(connector, edid);
return edid;
}
EXPORT_SYMBOL(drm_get_edid);
@ -2389,7 +2427,7 @@ add_detailed_modes(struct drm_connector *connector, struct edid *edid,
/*
* Search EDID for CEA extension block.
*/
static u8 *drm_find_cea_extension(struct edid *edid)
static u8 *drm_find_edid_extension(struct edid *edid, int ext_id)
{
u8 *edid_ext = NULL;
int i;
@ -2401,7 +2439,7 @@ static u8 *drm_find_cea_extension(struct edid *edid)
/* Find CEA extension */
for (i = 0; i < edid->extensions; i++) {
edid_ext = (u8 *)edid + EDID_LENGTH * (i + 1);
if (edid_ext[0] == CEA_EXT)
if (edid_ext[0] == ext_id)
break;
}
@ -2411,6 +2449,16 @@ static u8 *drm_find_cea_extension(struct edid *edid)
return edid_ext;
}
static u8 *drm_find_cea_extension(struct edid *edid)
{
return drm_find_edid_extension(edid, CEA_EXT);
}
static u8 *drm_find_displayid_extension(struct edid *edid)
{
return drm_find_edid_extension(edid, DISPLAYID_EXT);
}
/*
* Calculate the alternate clock for the CEA mode
* (60Hz vs. 59.94Hz etc.)
@ -3128,9 +3176,12 @@ void drm_edid_to_eld(struct drm_connector *connector, struct edid *edid)
}
}
eld[5] |= sad_count << 4;
eld[2] = (20 + mnl + sad_count * 3 + 3) / 4;
DRM_DEBUG_KMS("ELD size %d, SAD count %d\n", (int)eld[2], sad_count);
eld[DRM_ELD_BASELINE_ELD_LEN] =
DIV_ROUND_UP(drm_eld_calc_baseline_block_size(eld), 4);
DRM_DEBUG_KMS("ELD size %d, SAD count %d\n",
drm_eld_size(eld), sad_count);
}
EXPORT_SYMBOL(drm_edid_to_eld);
@ -3868,3 +3919,123 @@ drm_hdmi_vendor_infoframe_from_display_mode(struct hdmi_vendor_infoframe *frame,
return 0;
}
EXPORT_SYMBOL(drm_hdmi_vendor_infoframe_from_display_mode);
static int drm_parse_display_id(struct drm_connector *connector,
u8 *displayid, int length,
bool is_edid_extension)
{
/* if this is an EDID extension the first byte will be 0x70 */
int idx = 0;
struct displayid_hdr *base;
struct displayid_block *block;
u8 csum = 0;
int i;
if (is_edid_extension)
idx = 1;
base = (struct displayid_hdr *)&displayid[idx];
DRM_DEBUG_KMS("base revision 0x%x, length %d, %d %d\n",
base->rev, base->bytes, base->prod_id, base->ext_count);
if (base->bytes + 5 > length - idx)
return -EINVAL;
for (i = idx; i <= base->bytes + 5; i++) {
csum += displayid[i];
}
if (csum) {
DRM_ERROR("DisplayID checksum invalid, remainder is %d\n", csum);
return -EINVAL;
}
block = (struct displayid_block *)&displayid[idx + 4];
DRM_DEBUG_KMS("block id %d, rev %d, len %d\n",
block->tag, block->rev, block->num_bytes);
switch (block->tag) {
case DATA_BLOCK_TILED_DISPLAY: {
struct displayid_tiled_block *tile = (struct displayid_tiled_block *)block;
u16 w, h;
u8 tile_v_loc, tile_h_loc;
u8 num_v_tile, num_h_tile;
struct drm_tile_group *tg;
w = tile->tile_size[0] | tile->tile_size[1] << 8;
h = tile->tile_size[2] | tile->tile_size[3] << 8;
num_v_tile = (tile->topo[0] & 0xf) | (tile->topo[2] & 0x30);
num_h_tile = (tile->topo[0] >> 4) | ((tile->topo[2] >> 2) & 0x30);
tile_v_loc = (tile->topo[1] & 0xf) | ((tile->topo[2] & 0x3) << 4);
tile_h_loc = (tile->topo[1] >> 4) | (((tile->topo[2] >> 2) & 0x3) << 4);
connector->has_tile = true;
if (tile->tile_cap & 0x80)
connector->tile_is_single_monitor = true;
connector->num_h_tile = num_h_tile + 1;
connector->num_v_tile = num_v_tile + 1;
connector->tile_h_loc = tile_h_loc;
connector->tile_v_loc = tile_v_loc;
connector->tile_h_size = w + 1;
connector->tile_v_size = h + 1;
DRM_DEBUG_KMS("tile cap 0x%x\n", tile->tile_cap);
DRM_DEBUG_KMS("tile_size %d x %d\n", w + 1, h + 1);
DRM_DEBUG_KMS("topo num tiles %dx%d, location %dx%d\n",
num_h_tile + 1, num_v_tile + 1, tile_h_loc, tile_v_loc);
DRM_DEBUG_KMS("vend %c%c%c\n", tile->topology_id[0], tile->topology_id[1], tile->topology_id[2]);
tg = drm_mode_get_tile_group(connector->dev, tile->topology_id);
if (!tg) {
tg = drm_mode_create_tile_group(connector->dev, tile->topology_id);
}
if (!tg)
return -ENOMEM;
if (connector->tile_group != tg) {
/* if we haven't got a pointer,
take the reference, drop ref to old tile group */
if (connector->tile_group) {
drm_mode_put_tile_group(connector->dev, connector->tile_group);
}
connector->tile_group = tg;
} else
/* if same tile group, then release the ref we just took. */
drm_mode_put_tile_group(connector->dev, tg);
}
break;
default:
printk("unknown displayid tag %d\n", block->tag);
break;
}
return 0;
}
static void drm_get_displayid(struct drm_connector *connector,
struct edid *edid)
{
void *displayid = NULL;
int ret;
connector->has_tile = false;
displayid = drm_find_displayid_extension(edid);
if (!displayid) {
/* drop reference to any tile group we had */
goto out_drop_ref;
}
ret = drm_parse_display_id(connector, displayid, EDID_LENGTH, true);
if (ret < 0)
goto out_drop_ref;
if (!connector->has_tile)
goto out_drop_ref;
return;
out_drop_ref:
if (connector->tile_group) {
drm_mode_put_tile_group(connector->dev, connector->tile_group);
connector->tile_group = NULL;
}
return;
}


@ -254,8 +254,7 @@ static void *edid_load(struct drm_connector *connector, const char *name,
name, connector_name);
out:
if (fw)
release_firmware(fw);
release_firmware(fw);
return edid;
}


@ -347,9 +347,18 @@ bool drm_fb_helper_restore_fbdev_mode_unlocked(struct drm_fb_helper *fb_helper)
{
struct drm_device *dev = fb_helper->dev;
bool ret;
bool do_delayed = false;
drm_modeset_lock_all(dev);
ret = restore_fbdev_mode(fb_helper);
do_delayed = fb_helper->delayed_hotplug;
if (do_delayed)
fb_helper->delayed_hotplug = false;
drm_modeset_unlock_all(dev);
if (do_delayed)
drm_fb_helper_hotplug_event(fb_helper);
return ret;
}
EXPORT_SYMBOL(drm_fb_helper_restore_fbdev_mode_unlocked);
@ -888,10 +897,6 @@ int drm_fb_helper_set_par(struct fb_info *info)
drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper);
if (fb_helper->delayed_hotplug) {
fb_helper->delayed_hotplug = false;
drm_fb_helper_hotplug_event(fb_helper);
}
return 0;
}
EXPORT_SYMBOL(drm_fb_helper_set_par);
@ -995,19 +1000,21 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper,
crtc_count = 0;
for (i = 0; i < fb_helper->crtc_count; i++) {
struct drm_display_mode *desired_mode;
int x, y;
desired_mode = fb_helper->crtc_info[i].desired_mode;
x = fb_helper->crtc_info[i].x;
y = fb_helper->crtc_info[i].y;
if (desired_mode) {
if (gamma_size == 0)
gamma_size = fb_helper->crtc_info[i].mode_set.crtc->gamma_size;
if (desired_mode->hdisplay < sizes.fb_width)
sizes.fb_width = desired_mode->hdisplay;
if (desired_mode->vdisplay < sizes.fb_height)
sizes.fb_height = desired_mode->vdisplay;
if (desired_mode->hdisplay > sizes.surface_width)
sizes.surface_width = desired_mode->hdisplay;
if (desired_mode->vdisplay > sizes.surface_height)
sizes.surface_height = desired_mode->vdisplay;
if (desired_mode->hdisplay + x < sizes.fb_width)
sizes.fb_width = desired_mode->hdisplay + x;
if (desired_mode->vdisplay + y < sizes.fb_height)
sizes.fb_height = desired_mode->vdisplay + y;
if (desired_mode->hdisplay + x > sizes.surface_width)
sizes.surface_width = desired_mode->hdisplay + x;
if (desired_mode->vdisplay + y > sizes.surface_height)
sizes.surface_height = desired_mode->vdisplay + y;
crtc_count++;
}
}
@ -1307,6 +1314,7 @@ static void drm_enable_connectors(struct drm_fb_helper *fb_helper,
static bool drm_target_cloned(struct drm_fb_helper *fb_helper,
struct drm_display_mode **modes,
struct drm_fb_offset *offsets,
bool *enabled, int width, int height)
{
int count, i, j;
@ -1378,27 +1386,88 @@ static bool drm_target_cloned(struct drm_fb_helper *fb_helper,
return false;
}
static int drm_get_tile_offsets(struct drm_fb_helper *fb_helper,
struct drm_display_mode **modes,
struct drm_fb_offset *offsets,
int idx,
int h_idx, int v_idx)
{
struct drm_fb_helper_connector *fb_helper_conn;
int i;
int hoffset = 0, voffset = 0;
for (i = 0; i < fb_helper->connector_count; i++) {
fb_helper_conn = fb_helper->connector_info[i];
if (!fb_helper_conn->connector->has_tile)
continue;
if (!modes[i] && (h_idx || v_idx)) {
DRM_DEBUG_KMS("no modes for connector tiled %d %d\n", i,
fb_helper_conn->connector->base.id);
continue;
}
if (fb_helper_conn->connector->tile_h_loc < h_idx)
hoffset += modes[i]->hdisplay;
if (fb_helper_conn->connector->tile_v_loc < v_idx)
voffset += modes[i]->vdisplay;
}
offsets[idx].x = hoffset;
offsets[idx].y = voffset;
DRM_DEBUG_KMS("returned %d %d for %d %d\n", hoffset, voffset, h_idx, v_idx);
return 0;
}
static bool drm_target_preferred(struct drm_fb_helper *fb_helper,
struct drm_display_mode **modes,
struct drm_fb_offset *offsets,
bool *enabled, int width, int height)
{
struct drm_fb_helper_connector *fb_helper_conn;
int i;
uint64_t conn_configured = 0, mask;
int tile_pass = 0;
mask = (1 << fb_helper->connector_count) - 1;
retry:
for (i = 0; i < fb_helper->connector_count; i++) {
fb_helper_conn = fb_helper->connector_info[i];
if (enabled[i] == false)
if (conn_configured & (1 << i))
continue;
if (enabled[i] == false) {
conn_configured |= (1 << i);
continue;
}
/* first pass over all the untiled connectors */
if (tile_pass == 0 && fb_helper_conn->connector->has_tile)
continue;
if (tile_pass == 1) {
if (fb_helper_conn->connector->tile_h_loc != 0 ||
fb_helper_conn->connector->tile_v_loc != 0)
continue;
} else {
if (fb_helper_conn->connector->tile_h_loc != tile_pass -1 &&
fb_helper_conn->connector->tile_v_loc != tile_pass - 1)
/* if this tile_pass doesn't cover any of the tiles - keep going */
continue;
/* find the tile offsets for this pass - need
to find all tiles left and above */
drm_get_tile_offsets(fb_helper, modes, offsets,
i, fb_helper_conn->connector->tile_h_loc, fb_helper_conn->connector->tile_v_loc);
}
DRM_DEBUG_KMS("looking for cmdline mode on connector %d\n",
fb_helper_conn->connector->base.id);
/* go for command line mode first */
modes[i] = drm_pick_cmdline_mode(fb_helper_conn, width, height);
if (!modes[i]) {
DRM_DEBUG_KMS("looking for preferred mode on connector %d\n",
fb_helper_conn->connector->base.id);
DRM_DEBUG_KMS("looking for preferred mode on connector %d %d\n",
fb_helper_conn->connector->base.id, fb_helper_conn->connector->tile_group ? fb_helper_conn->connector->tile_group->id : 0);
modes[i] = drm_has_preferred_mode(fb_helper_conn, width, height);
}
/* No preferred modes, pick one off the list */
@ -1408,6 +1477,12 @@ static bool drm_target_preferred(struct drm_fb_helper *fb_helper,
}
DRM_DEBUG_KMS("found mode %s\n", modes[i] ? modes[i]->name :
"none");
conn_configured |= (1 << i);
}
if ((conn_configured & mask) != mask) {
tile_pass++;
goto retry;
}
return true;
}
@ -1497,6 +1572,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
struct drm_device *dev = fb_helper->dev;
struct drm_fb_helper_crtc **crtcs;
struct drm_display_mode **modes;
struct drm_fb_offset *offsets;
struct drm_mode_set *modeset;
bool *enabled;
int width, height;
@ -1511,9 +1587,11 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
sizeof(struct drm_fb_helper_crtc *), GFP_KERNEL);
modes = kcalloc(dev->mode_config.num_connector,
sizeof(struct drm_display_mode *), GFP_KERNEL);
offsets = kcalloc(dev->mode_config.num_connector,
sizeof(struct drm_fb_offset), GFP_KERNEL);
enabled = kcalloc(dev->mode_config.num_connector,
sizeof(bool), GFP_KERNEL);
if (!crtcs || !modes || !enabled) {
if (!crtcs || !modes || !enabled || !offsets) {
DRM_ERROR("Memory allocation failed\n");
goto out;
}
@ -1523,14 +1601,16 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
if (!(fb_helper->funcs->initial_config &&
fb_helper->funcs->initial_config(fb_helper, crtcs, modes,
offsets,
enabled, width, height))) {
memset(modes, 0, dev->mode_config.num_connector*sizeof(modes[0]));
memset(crtcs, 0, dev->mode_config.num_connector*sizeof(crtcs[0]));
memset(offsets, 0, dev->mode_config.num_connector*sizeof(offsets[0]));
if (!drm_target_cloned(fb_helper,
modes, enabled, width, height) &&
!drm_target_preferred(fb_helper,
modes, enabled, width, height))
if (!drm_target_cloned(fb_helper, modes, offsets,
enabled, width, height) &&
!drm_target_preferred(fb_helper, modes, offsets,
enabled, width, height))
DRM_ERROR("Unable to find initial modes\n");
DRM_DEBUG_KMS("picking CRTCs for %dx%d config\n",
@ -1550,18 +1630,23 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
for (i = 0; i < fb_helper->connector_count; i++) {
struct drm_display_mode *mode = modes[i];
struct drm_fb_helper_crtc *fb_crtc = crtcs[i];
struct drm_fb_offset *offset = &offsets[i];
modeset = &fb_crtc->mode_set;
if (mode && fb_crtc) {
DRM_DEBUG_KMS("desired mode %s set on crtc %d\n",
mode->name, fb_crtc->mode_set.crtc->base.id);
DRM_DEBUG_KMS("desired mode %s set on crtc %d (%d,%d)\n",
mode->name, fb_crtc->mode_set.crtc->base.id, offset->x, offset->y);
fb_crtc->desired_mode = mode;
fb_crtc->x = offset->x;
fb_crtc->y = offset->y;
if (modeset->mode)
drm_mode_destroy(dev, modeset->mode);
modeset->mode = drm_mode_duplicate(dev,
fb_crtc->desired_mode);
modeset->connectors[modeset->num_connectors++] = fb_helper->connector_info[i]->connector;
modeset->fb = fb_helper->fb;
modeset->x = offset->x;
modeset->y = offset->y;
}
}
@ -1570,7 +1655,6 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
modeset = &fb_helper->crtc_info[i].mode_set;
if (modeset->num_connectors == 0) {
BUG_ON(modeset->fb);
BUG_ON(modeset->num_connectors);
if (modeset->mode)
drm_mode_destroy(dev, modeset->mode);
modeset->mode = NULL;
@ -1579,6 +1663,7 @@ static void drm_setup_crtcs(struct drm_fb_helper *fb_helper)
out:
kfree(crtcs);
kfree(modes);
kfree(offsets);
kfree(enabled);
}


@ -24,6 +24,44 @@
#include "drmP.h"
#include "drm_flip_work.h"
/**
* drm_flip_work_allocate_task - allocate a flip-work task
* @data: data associated to the task
* @flags: allocator flags
*
* Allocate a drm_flip_task object and attach private data to it.
*/
struct drm_flip_task *drm_flip_work_allocate_task(void *data, gfp_t flags)
{
struct drm_flip_task *task;
task = kzalloc(sizeof(*task), flags);
if (task)
task->data = data;
return task;
}
EXPORT_SYMBOL(drm_flip_work_allocate_task);
/**
* drm_flip_work_queue_task - queue a specific task
* @work: the flip-work
* @task: the task to handle
*
* Queues task, that will later be run (passed back to drm_flip_func_t
* func) on a work queue after drm_flip_work_commit() is called.
*/
void drm_flip_work_queue_task(struct drm_flip_work *work,
struct drm_flip_task *task)
{
unsigned long flags;
spin_lock_irqsave(&work->lock, flags);
list_add_tail(&task->node, &work->queued);
spin_unlock_irqrestore(&work->lock, flags);
}
EXPORT_SYMBOL(drm_flip_work_queue_task);
/**
* drm_flip_work_queue - queue work
* @work: the flip-work
@ -34,10 +72,14 @@
*/
void drm_flip_work_queue(struct drm_flip_work *work, void *val)
{
if (kfifo_put(&work->fifo, val)) {
atomic_inc(&work->pending);
struct drm_flip_task *task;
task = drm_flip_work_allocate_task(val,
drm_can_sleep() ? GFP_KERNEL : GFP_ATOMIC);
if (task) {
drm_flip_work_queue_task(work, task);
} else {
DRM_ERROR("%s fifo full!\n", work->name);
DRM_ERROR("%s could not allocate task!\n", work->name);
work->func(work, val);
}
}
@ -56,9 +98,12 @@ EXPORT_SYMBOL(drm_flip_work_queue);
void drm_flip_work_commit(struct drm_flip_work *work,
struct workqueue_struct *wq)
{
uint32_t pending = atomic_read(&work->pending);
atomic_add(pending, &work->count);
atomic_sub(pending, &work->pending);
unsigned long flags;
spin_lock_irqsave(&work->lock, flags);
list_splice_tail(&work->queued, &work->commited);
INIT_LIST_HEAD(&work->queued);
spin_unlock_irqrestore(&work->lock, flags);
queue_work(wq, &work->worker);
}
EXPORT_SYMBOL(drm_flip_work_commit);
@ -66,47 +111,46 @@ EXPORT_SYMBOL(drm_flip_work_commit);
static void flip_worker(struct work_struct *w)
{
struct drm_flip_work *work = container_of(w, struct drm_flip_work, worker);
uint32_t count = atomic_read(&work->count);
void *val = NULL;
struct list_head tasks;
unsigned long flags;
atomic_sub(count, &work->count);
while (1) {
struct drm_flip_task *task, *tmp;
while(count--)
if (!WARN_ON(!kfifo_get(&work->fifo, &val)))
work->func(work, val);
INIT_LIST_HEAD(&tasks);
spin_lock_irqsave(&work->lock, flags);
list_splice_tail(&work->commited, &tasks);
INIT_LIST_HEAD(&work->commited);
spin_unlock_irqrestore(&work->lock, flags);
if (list_empty(&tasks))
break;
list_for_each_entry_safe(task, tmp, &tasks, node) {
work->func(work, task->data);
kfree(task);
}
}
}
/**
* drm_flip_work_init - initialize flip-work
* @work: the flip-work to initialize
* @size: the max queue depth
* @name: debug name
* @func: the callback work function
*
* Initializes/allocates resources for the flip-work
*
* RETURNS:
* Zero on success, error code on failure.
*/
int drm_flip_work_init(struct drm_flip_work *work, int size,
void drm_flip_work_init(struct drm_flip_work *work,
const char *name, drm_flip_func_t func)
{
int ret;
work->name = name;
atomic_set(&work->count, 0);
atomic_set(&work->pending, 0);
INIT_LIST_HEAD(&work->queued);
INIT_LIST_HEAD(&work->commited);
spin_lock_init(&work->lock);
work->func = func;
ret = kfifo_alloc(&work->fifo, size, GFP_KERNEL);
if (ret) {
DRM_ERROR("could not allocate %s fifo\n", name);
return ret;
}
INIT_WORK(&work->worker, flip_worker);
return 0;
}
EXPORT_SYMBOL(drm_flip_work_init);
@ -118,7 +162,6 @@ EXPORT_SYMBOL(drm_flip_work_init);
*/
void drm_flip_work_cleanup(struct drm_flip_work *work)
{
WARN_ON(!kfifo_is_empty(&work->fifo));
kfifo_free(&work->fifo);
WARN_ON(!list_empty(&work->queued) || !list_empty(&work->commited));
}
EXPORT_SYMBOL(drm_flip_work_cleanup);
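For illustration (not part of this commit), the reworked flip-work API is now list-based and allocates one task per queued item, so drm_flip_work_init() no longer takes a queue depth and cannot fail. A minimal usage sketch; unref_fb_worker and struct my_crtc are hypothetical names:

#include <drm/drmP.h>
#include <drm/drm_flip_work.h>

struct my_crtc {
    struct drm_flip_work unref_work;
};

static void unref_fb_worker(struct drm_flip_work *work, void *val)
{
    drm_framebuffer_unreference(val);   /* runs later, on the workqueue */
}

static void my_crtc_init_flip_work(struct my_crtc *c)
{
    /* no size argument and no return value any more */
    drm_flip_work_init(&c->unref_work, "fb unref", unref_fb_worker);
}

static void my_crtc_flip_done(struct my_crtc *c, struct drm_framebuffer *old_fb)
{
    /* may be called from atomic context; the helper falls back to GFP_ATOMIC */
    drm_flip_work_queue(&c->unref_work, old_fb);
    /* hand everything queued so far over to the workqueue */
    drm_flip_work_commit(&c->unref_work, system_unbound_wq);
}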


@ -515,16 +515,19 @@ ssize_t drm_read(struct file *filp, char __user *buffer,
size_t total;
ssize_t ret;
ret = wait_event_interruptible(file_priv->event_wait,
!list_empty(&file_priv->event_list));
if (ret < 0)
return ret;
if ((filp->f_flags & O_NONBLOCK) == 0) {
ret = wait_event_interruptible(file_priv->event_wait,
!list_empty(&file_priv->event_list));
if (ret < 0)
return ret;
}
total = 0;
while (drm_dequeue_event(file_priv, total, count, &e)) {
if (copy_to_user(buffer + total,
e->event, e->event->length)) {
total = -EFAULT;
e->destroy(e);
break;
}
@ -532,7 +535,7 @@ ssize_t drm_read(struct file *filp, char __user *buffer,
e->destroy(e);
}
return total;
return total ?: -EAGAIN;
}
EXPORT_SYMBOL(drm_read);
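With this change a DRM file opened with O_NONBLOCK no longer sleeps in read(); userspace sees EAGAIN when no event is pending. A hedged userspace sketch, assuming drm_fd is an already-open DRM device fd with O_NONBLOCK set and libdrm's header providing struct drm_event:

#include <errno.h>
#include <stddef.h>
#include <unistd.h>
#include <xf86drm.h>

static int drain_drm_events(int drm_fd)
{
    char buf[1024];
    ssize_t len, off;

    /* with O_NONBLOCK, read() now fails with EAGAIN instead of blocking */
    len = read(drm_fd, buf, sizeof(buf));
    if (len < 0)
        return errno == EAGAIN ? 0 : -errno;

    for (off = 0; off < len; ) {
        struct drm_event *e = (struct drm_event *)&buf[off];

        /* dispatch on e->type here, e.g. DRM_EVENT_FLIP_COMPLETE */
        off += e->length;
    }

    return 0;
}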


@ -188,7 +188,7 @@ drm_gem_remove_prime_handles(struct drm_gem_object *obj, struct drm_file *filp)
}
/**
* drm_gem_object_free - release resources bound to userspace handles
* drm_gem_object_handle_free - release resources bound to userspace handles
* @obj: GEM object to clean up.
*
* Called after the last handle to the object has been closed
@ -309,7 +309,7 @@ EXPORT_SYMBOL(drm_gem_dumb_destroy);
* drm_gem_handle_create_tail - internal functions to create a handle
* @file_priv: drm file-private structure to register the handle for
* @obj: object to register
* @handlep: pionter to return the created handle to the caller
* @handlep: pointer to return the created handle to the caller
*
* This expects the dev->object_name_lock to be held already and will drop it
* before returning. Used to avoid races in establishing new handles when
@ -362,7 +362,7 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
}
/**
* gem_handle_create - create a gem handle for an object
* drm_gem_handle_create - create a gem handle for an object
* @file_priv: drm file-private structure to register the handle for
* @obj: object to register
* @handlep: pointer to return the created handle to the caller
@ -371,10 +371,9 @@ drm_gem_handle_create_tail(struct drm_file *file_priv,
* to the object, which includes a regular reference count. Callers
* will likely want to dereference the object afterwards.
*/
int
drm_gem_handle_create(struct drm_file *file_priv,
struct drm_gem_object *obj,
u32 *handlep)
int drm_gem_handle_create(struct drm_file *file_priv,
struct drm_gem_object *obj,
u32 *handlep)
{
mutex_lock(&obj->dev->object_name_lock);


@ -29,18 +29,31 @@
#include <drm/drm_gem_cma_helper.h>
#include <drm/drm_vma_manager.h>
/*
/**
* DOC: cma helpers
*
* The Contiguous Memory Allocator reserves a pool of memory at early boot
* that is used to service requests for large blocks of contiguous memory.
*
* The DRM GEM/CMA helpers use this allocator as a means to provide buffer
* objects that are physically contiguous in memory. This is useful for
* display drivers that are unable to map scattered buffers via an IOMMU.
*/
/**
* __drm_gem_cma_create - Create a GEM CMA object without allocating memory
* @drm: The drm device
* @size: The GEM object size
* @drm: DRM device
* @size: size of the object to allocate
*
* This function creates and initializes a GEM CMA object of the given size, but
* doesn't allocate any memory to back the object.
* This function creates and initializes a GEM CMA object of the given size,
* but doesn't allocate any memory to back the object.
*
* Return a struct drm_gem_cma_object* on success or ERR_PTR values on failure.
* Returns:
* A struct drm_gem_cma_object * on success or an ERR_PTR()-encoded negative
* error code on failure.
*/
static struct drm_gem_cma_object *
__drm_gem_cma_create(struct drm_device *drm, unsigned int size)
__drm_gem_cma_create(struct drm_device *drm, size_t size)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *gem_obj;
@ -69,14 +82,21 @@ error:
return ERR_PTR(ret);
}
/*
/**
* drm_gem_cma_create - allocate an object with the given size
* @drm: DRM device
* @size: size of the object to allocate
*
* returns a struct drm_gem_cma_object* on success or ERR_PTR values
* on failure.
* This function creates a CMA GEM object and allocates a contiguous chunk of
* memory as backing store. The backing memory has the writecombine attribute
* set.
*
* Returns:
* A struct drm_gem_cma_object * on success or an ERR_PTR()-encoded negative
* error code on failure.
*/
struct drm_gem_cma_object *drm_gem_cma_create(struct drm_device *drm,
unsigned int size)
size_t size)
{
struct drm_gem_cma_object *cma_obj;
int ret;
@ -104,17 +124,26 @@ error:
}
EXPORT_SYMBOL_GPL(drm_gem_cma_create);
/*
* drm_gem_cma_create_with_handle - allocate an object with the given
* size and create a gem handle on it
/**
* drm_gem_cma_create_with_handle - allocate an object with the given size and
* return a GEM handle to it
* @file_priv: DRM file-private structure to register the handle for
* @drm: DRM device
* @size: size of the object to allocate
* @handle: return location for the GEM handle
*
* returns a struct drm_gem_cma_object* on success or ERR_PTR values
* on failure.
* This function creates a CMA GEM object, allocating a physically contiguous
* chunk of memory as backing store. The GEM object is then added to the list
* of objects associated with the given file and a handle to it is returned.
*
* Returns:
* A struct drm_gem_cma_object * on success or an ERR_PTR()-encoded negative
* error code on failure.
*/
static struct drm_gem_cma_object *drm_gem_cma_create_with_handle(
struct drm_file *file_priv,
struct drm_device *drm, unsigned int size,
unsigned int *handle)
static struct drm_gem_cma_object *
drm_gem_cma_create_with_handle(struct drm_file *file_priv,
struct drm_device *drm, size_t size,
uint32_t *handle)
{
struct drm_gem_cma_object *cma_obj;
struct drm_gem_object *gem_obj;
@ -145,16 +174,19 @@ err_handle_create:
return ERR_PTR(ret);
}
/*
* drm_gem_cma_free_object - (struct drm_driver)->gem_free_object callback
* function
/**
* drm_gem_cma_free_object - free resources associated with a CMA GEM object
* @gem_obj: GEM object to free
*
* This function frees the backing memory of the CMA GEM object, cleans up the
* GEM object state and frees the memory used to store the object itself.
* Drivers using the CMA helpers should set this as their DRM driver's
* ->gem_free_object() callback.
*/
void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
{
struct drm_gem_cma_object *cma_obj;
drm_gem_free_mmap_offset(gem_obj);
cma_obj = to_drm_gem_cma_obj(gem_obj);
if (cma_obj->vaddr) {
@ -170,18 +202,26 @@ void drm_gem_cma_free_object(struct drm_gem_object *gem_obj)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_free_object);
/*
* drm_gem_cma_dumb_create - (struct drm_driver)->dumb_create callback
* function
/**
* drm_gem_cma_dumb_create_internal - create a dumb buffer object
* @file_priv: DRM file-private structure to create the dumb buffer for
* @drm: DRM device
* @args: IOCTL data
*
* This aligns the pitch and size arguments to the minimum required. wrap
* this into your own function if you need bigger alignment.
* This aligns the pitch and size arguments to the minimum required. This is
* an internal helper that can be wrapped by a driver to account for hardware
* with more specific alignment requirements. It should not be used directly
* as the ->dumb_create() callback in a DRM driver.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_cma_dumb_create(struct drm_file *file_priv,
struct drm_device *dev, struct drm_mode_create_dumb *args)
int drm_gem_cma_dumb_create_internal(struct drm_file *file_priv,
struct drm_device *drm,
struct drm_mode_create_dumb *args)
{
unsigned int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
struct drm_gem_cma_object *cma_obj;
int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
if (args->pitch < min_pitch)
args->pitch = min_pitch;
@ -189,18 +229,63 @@ int drm_gem_cma_dumb_create(struct drm_file *file_priv,
if (args->size < args->pitch * args->height)
args->size = args->pitch * args->height;
cma_obj = drm_gem_cma_create_with_handle(file_priv, dev,
args->size, &args->handle);
cma_obj = drm_gem_cma_create_with_handle(file_priv, drm, args->size,
&args->handle);
return PTR_ERR_OR_ZERO(cma_obj);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create_internal);
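As the kernel-doc above suggests, hardware with stricter pitch requirements can wrap the internal helper instead of using drm_gem_cma_dumb_create() directly. A sketch with a hypothetical driver and an assumed 128-byte pitch alignment:

static int foo_gem_dumb_create(struct drm_file *file_priv,
                               struct drm_device *drm,
                               struct drm_mode_create_dumb *args)
{
    unsigned int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);

    /* tighten the pitch before handing off to the generic helper */
    args->pitch = ALIGN(max(args->pitch, min_pitch), 128);

    return drm_gem_cma_dumb_create_internal(file_priv, drm, args);
}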
/**
* drm_gem_cma_dumb_create - create a dumb buffer object
* @file_priv: DRM file-private structure to create the dumb buffer for
* @drm: DRM device
* @args: IOCTL data
*
* This function computes the pitch of the dumb buffer and rounds it up to an
* integer number of bytes per pixel. Drivers for hardware that doesn't have
* any additional restrictions on the pitch can directly use this function as
* their ->dumb_create() callback.
*
* For hardware with additional restrictions, drivers can adjust the fields
* set up by userspace and pass the IOCTL data along to the
* drm_gem_cma_dumb_create_internal() function.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_cma_dumb_create(struct drm_file *file_priv,
struct drm_device *drm,
struct drm_mode_create_dumb *args)
{
struct drm_gem_cma_object *cma_obj;
args->pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
args->size = args->pitch * args->height;
cma_obj = drm_gem_cma_create_with_handle(file_priv, drm, args->size,
&args->handle);
return PTR_ERR_OR_ZERO(cma_obj);
}
EXPORT_SYMBOL_GPL(drm_gem_cma_dumb_create);
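Taken together, the callbacks documented in this file are meant to be plugged straight into a driver's struct drm_driver. A hedged sketch of the typical wiring for a hypothetical CMA-based KMS driver; the exact feature flags and callback set depend on the driver:

static const struct file_operations foo_drm_fops = {
    .owner          = THIS_MODULE,
    .open           = drm_open,
    .release        = drm_release,
    .unlocked_ioctl = drm_ioctl,
    .poll           = drm_poll,
    .read           = drm_read,
    .mmap           = drm_gem_cma_mmap,
};

static struct drm_driver foo_drm_driver = {
    .driver_features           = DRIVER_GEM | DRIVER_MODESET | DRIVER_PRIME,
    .fops                      = &foo_drm_fops,
    .gem_free_object           = drm_gem_cma_free_object,
    .gem_vm_ops                = &drm_gem_cma_vm_ops,
    .dumb_create               = drm_gem_cma_dumb_create,
    .dumb_map_offset           = drm_gem_cma_dumb_map_offset,
    .dumb_destroy              = drm_gem_dumb_destroy,
    .gem_prime_get_sg_table    = drm_gem_cma_prime_get_sg_table,
    .gem_prime_import_sg_table = drm_gem_cma_prime_import_sg_table,
    .gem_prime_vmap            = drm_gem_cma_prime_vmap,
    .gem_prime_vunmap          = drm_gem_cma_prime_vunmap,
    .gem_prime_mmap            = drm_gem_cma_prime_mmap,
    .name                      = "foo",
    .desc                      = "hypothetical CMA-backed KMS driver",
    .date                      = "20141201",
    .major                     = 1,
};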
/*
* drm_gem_cma_dumb_map_offset - (struct drm_driver)->dumb_map_offset callback
* function
/**
* drm_gem_cma_dumb_map_offset - return the fake mmap offset for a CMA GEM
* object
* @file_priv: DRM file-private structure containing the GEM object
* @drm: DRM device
* @handle: GEM object handle
* @offset: return location for the fake mmap offset
*
* This function looks up an object by its handle and returns the fake mmap
* offset associated with it. Drivers using the CMA helpers should set this
* as their DRM driver's ->dumb_map_offset() callback.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_cma_dumb_map_offset(struct drm_file *file_priv,
struct drm_device *drm, uint32_t handle, uint64_t *offset)
struct drm_device *drm, u32 handle,
u64 *offset)
{
struct drm_gem_object *gem_obj;
@ -208,7 +293,7 @@ int drm_gem_cma_dumb_map_offset(struct drm_file *file_priv,
gem_obj = drm_gem_object_lookup(drm, file_priv, handle);
if (!gem_obj) {
dev_err(drm->dev, "failed to lookup gem object\n");
dev_err(drm->dev, "failed to lookup GEM object\n");
mutex_unlock(&drm->struct_mutex);
return -EINVAL;
}
@ -251,8 +336,20 @@ static int drm_gem_cma_mmap_obj(struct drm_gem_cma_object *cma_obj,
return ret;
}
/*
* drm_gem_cma_mmap - (struct file_operation)->mmap callback function
/**
* drm_gem_cma_mmap - memory-map a CMA GEM object
* @filp: file object
* @vma: VMA for the area to be mapped
*
* This function implements an augmented version of the GEM DRM file mmap
* operation for CMA objects: In addition to the usual GEM VMA setup it
* immediately faults in the entire object instead of using on-demand
* faulting. Drivers which employ the CMA helpers should use this function
* as their ->mmap() handler in the DRM device file's file_operations
* structure.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
{
@ -272,7 +369,16 @@ int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);
#ifdef CONFIG_DEBUG_FS
void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m)
/**
* drm_gem_cma_describe - describe a CMA GEM object for debugfs
* @cma_obj: CMA GEM object
* @m: debugfs file handle
*
* This function can be used to dump a human-readable representation of the
* CMA GEM object into a synthetic file.
*/
void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj,
struct seq_file *m)
{
struct drm_gem_object *obj = &cma_obj->base;
struct drm_device *dev = obj->dev;
@ -291,7 +397,18 @@ void drm_gem_cma_describe(struct drm_gem_cma_object *cma_obj, struct seq_file *m
EXPORT_SYMBOL_GPL(drm_gem_cma_describe);
#endif
/* low-level interface prime helpers */
/**
* drm_gem_cma_prime_get_sg_table - provide a scatter/gather table of pinned
* pages for a CMA GEM object
* @obj: GEM object
*
* This function exports a scatter/gather table suitable for PRIME usage by
* calling the standard DMA mapping API. Drivers using the CMA helpers should
* set this as their DRM driver's ->gem_prime_get_sg_table() callback.
*
* Returns:
* A pointer to the scatter/gather table of pinned pages or NULL on failure.
*/
struct sg_table *drm_gem_cma_prime_get_sg_table(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
@ -315,6 +432,23 @@ out:
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_get_sg_table);
/**
* drm_gem_cma_prime_import_sg_table - produce a CMA GEM object from another
* driver's scatter/gather table of pinned pages
* @dev: device to import into
* @attach: DMA-BUF attachment
* @sgt: scatter/gather table of pinned pages
*
* This function imports a scatter/gather table exported via DMA-BUF by
* another driver. Imported buffers must be physically contiguous in memory
* (i.e. the scatter/gather table must contain a single entry). Drivers that
* use the CMA helpers should set this as their DRM driver's
* ->gem_prime_import_sg_table() callback.
*
* Returns:
* A pointer to a newly created GEM object or an ERR_PTR-encoded negative
* error code on failure.
*/
struct drm_gem_object *
drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
struct dma_buf_attachment *attach,
@ -339,6 +473,18 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_import_sg_table);
/**
* drm_gem_cma_prime_mmap - memory-map an exported CMA GEM object
* @obj: GEM object
* @vma: VMA for the area to be mapped
*
* This function maps a buffer imported via DRM PRIME into a userspace
* process's address space. Drivers that use the CMA helpers should set this
* as their DRM driver's ->gem_prime_mmap() callback.
*
* Returns:
* 0 on success or a negative error code on failure.
*/
int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
struct vm_area_struct *vma)
{
@ -357,6 +503,20 @@ int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
/**
* drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
* address space
* @obj: GEM object
*
* This function maps a buffer exported via DRM PRIME into the kernel's
* virtual address space. Since the CMA buffers are already mapped into the
* kernel virtual address space this simply returns the cached virtual
* address. Drivers using the CMA helpers should set this as their DRM
* driver's ->gem_prime_vmap() callback.
*
* Returns:
* The kernel virtual address of the CMA GEM object's backing store.
*/
void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
{
struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
@ -365,6 +525,17 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
}
EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
/**
* drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
* address space
* @obj: GEM object
* @vaddr: kernel virtual address where the CMA GEM object was mapped
*
* This function removes a buffer exported via DRM PRIME from the kernel's
* virtual address space. This is a no-op because CMA buffers cannot be
* unmapped from kernel space. Drivers using the CMA helpers should set this
* as their DRM driver's ->gem_prime_vunmap() callback.
*/
void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
{
/* Nothing to do */


@ -166,7 +166,7 @@ static void vblank_disable_and_save(struct drm_device *dev, int crtc)
spin_lock_irqsave(&dev->vblank_time_lock, irqflags);
/*
* If the vblank interrupt was already disbled update the count
* If the vblank interrupt was already disabled update the count
* and timestamp to maintain the appearance that the counter
* has been ticking all along until this time. This makes the
* count account for the entire time between drm_vblank_on() and
@ -1029,7 +1029,8 @@ void drm_vblank_put(struct drm_device *dev, int crtc)
{
struct drm_vblank_crtc *vblank = &dev->vblank[crtc];
BUG_ON(atomic_read(&vblank->refcount) == 0);
if (WARN_ON(atomic_read(&vblank->refcount) == 0))
return;
if (WARN_ON(crtc >= dev->num_crtcs))
return;
@ -1190,7 +1191,7 @@ EXPORT_SYMBOL(drm_crtc_vblank_off);
*
* This functions restores the vblank interrupt state captured with
* drm_vblank_off() again. Note that calls to drm_vblank_on() and
* drm_vblank_off() can be unbalanced and so can also be unconditionaly called
* drm_vblank_off() can be unbalanced and so can also be unconditionally called
* in driver load code to reflect the current hardware state of the crtc.
*
* This is the legacy version of drm_crtc_vblank_on().
@ -1237,7 +1238,7 @@ EXPORT_SYMBOL(drm_vblank_on);
*
* This functions restores the vblank interrupt state captured with
* drm_vblank_off() again. Note that calls to drm_vblank_on() and
* drm_vblank_off() can be unbalanced and so can also be unconditionaly called
* drm_vblank_off() can be unbalanced and so can also be unconditionally called
* in driver load code to reflect the current hardware state of the crtc.
*
* This is the native kms version of drm_vblank_on().


@ -35,6 +35,16 @@
#include <video/mipi_display.h>
/**
* DOC: dsi helpers
*
* These functions contain some common logic and helpers to deal with MIPI DSI
* peripherals.
*
* Helpers are provided for a number of standard MIPI DSI commands as well as a
* subset of the MIPI DCS command set.
*/
static int mipi_dsi_device_match(struct device *dev, struct device_driver *drv)
{
return of_driver_match_device(dev, drv);
@ -57,6 +67,29 @@ static struct bus_type mipi_dsi_bus_type = {
.pm = &mipi_dsi_device_pm_ops,
};
static int of_device_match(struct device *dev, void *data)
{
return dev->of_node == data;
}
/**
* of_find_mipi_dsi_device_by_node() - find the MIPI DSI device matching a
* device tree node
* @np: device tree node
*
* Return: A pointer to the MIPI DSI device corresponding to @np or NULL if no
* such device exists (or has not been registered yet).
*/
struct mipi_dsi_device *of_find_mipi_dsi_device_by_node(struct device_node *np)
{
struct device *dev;
dev = bus_find_device(&mipi_dsi_bus_type, NULL, np, of_device_match);
return dev ? to_mipi_dsi_device(dev) : NULL;
}
EXPORT_SYMBOL(of_find_mipi_dsi_device_by_node);
static void mipi_dsi_dev_release(struct device *dev)
{
struct mipi_dsi_device *dsi = to_mipi_dsi_device(dev);
@ -198,59 +231,351 @@ int mipi_dsi_detach(struct mipi_dsi_device *dsi)
}
EXPORT_SYMBOL(mipi_dsi_detach);
/**
* mipi_dsi_dcs_write - send DCS write command
* @dsi: DSI device
* @data: pointer to the command followed by parameters
* @len: length of @data
*/
ssize_t mipi_dsi_dcs_write(struct mipi_dsi_device *dsi, const void *data,
size_t len)
static ssize_t mipi_dsi_device_transfer(struct mipi_dsi_device *dsi,
struct mipi_dsi_msg *msg)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
if (!ops || !ops->transfer)
return -ENOSYS;
if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
msg->flags |= MIPI_DSI_MSG_USE_LPM;
return ops->transfer(dsi->host, msg);
}
/**
* mipi_dsi_packet_format_is_short - check if a packet is of the short format
* @type: MIPI DSI data type of the packet
*
* Return: true if the packet for the given data type is a short packet, false
* otherwise.
*/
bool mipi_dsi_packet_format_is_short(u8 type)
{
switch (type) {
case MIPI_DSI_V_SYNC_START:
case MIPI_DSI_V_SYNC_END:
case MIPI_DSI_H_SYNC_START:
case MIPI_DSI_H_SYNC_END:
case MIPI_DSI_END_OF_TRANSMISSION:
case MIPI_DSI_COLOR_MODE_OFF:
case MIPI_DSI_COLOR_MODE_ON:
case MIPI_DSI_SHUTDOWN_PERIPHERAL:
case MIPI_DSI_TURN_ON_PERIPHERAL:
case MIPI_DSI_GENERIC_SHORT_WRITE_0_PARAM:
case MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM:
case MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM:
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_READ:
case MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE:
return true;
}
return false;
}
EXPORT_SYMBOL(mipi_dsi_packet_format_is_short);
/**
* mipi_dsi_packet_format_is_long - check if a packet is of the long format
* @type: MIPI DSI data type of the packet
*
* Return: true if the packet for the given data type is a long packet, false
* otherwise.
*/
bool mipi_dsi_packet_format_is_long(u8 type)
{
switch (type) {
case MIPI_DSI_NULL_PACKET:
case MIPI_DSI_BLANKING_PACKET:
case MIPI_DSI_GENERIC_LONG_WRITE:
case MIPI_DSI_DCS_LONG_WRITE:
case MIPI_DSI_LOOSELY_PACKED_PIXEL_STREAM_YCBCR20:
case MIPI_DSI_PACKED_PIXEL_STREAM_YCBCR24:
case MIPI_DSI_PACKED_PIXEL_STREAM_YCBCR16:
case MIPI_DSI_PACKED_PIXEL_STREAM_30:
case MIPI_DSI_PACKED_PIXEL_STREAM_36:
case MIPI_DSI_PACKED_PIXEL_STREAM_YCBCR12:
case MIPI_DSI_PACKED_PIXEL_STREAM_16:
case MIPI_DSI_PACKED_PIXEL_STREAM_18:
case MIPI_DSI_PIXEL_STREAM_3BYTE_18:
case MIPI_DSI_PACKED_PIXEL_STREAM_24:
return true;
}
return false;
}
EXPORT_SYMBOL(mipi_dsi_packet_format_is_long);
/**
* mipi_dsi_create_packet - create a packet from a message according to the
* DSI protocol
* @packet: pointer to a DSI packet structure
* @msg: message to translate into a packet
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_create_packet(struct mipi_dsi_packet *packet,
const struct mipi_dsi_msg *msg)
{
const u8 *tx = msg->tx_buf;
if (!packet || !msg)
return -EINVAL;
/* do some minimum sanity checking */
if (!mipi_dsi_packet_format_is_short(msg->type) &&
!mipi_dsi_packet_format_is_long(msg->type))
return -EINVAL;
if (msg->channel > 3)
return -EINVAL;
memset(packet, 0, sizeof(*packet));
packet->header[0] = ((msg->channel & 0x3) << 6) | (msg->type & 0x3f);
/* TODO: compute ECC if hardware support is not available */
/*
* Long write packets contain the word count in header bytes 1 and 2.
* The payload follows the header and is word count bytes long.
*
* Short write packets encode up to two parameters in header bytes 1
* and 2.
*/
if (mipi_dsi_packet_format_is_long(msg->type)) {
packet->header[1] = (msg->tx_len >> 0) & 0xff;
packet->header[2] = (msg->tx_len >> 8) & 0xff;
packet->payload_length = msg->tx_len;
packet->payload = tx;
} else {
packet->header[1] = (msg->tx_len > 0) ? tx[0] : 0;
packet->header[2] = (msg->tx_len > 1) ? tx[1] : 0;
}
packet->size = sizeof(packet->header) + packet->payload_length;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_create_packet);
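A worked example of what the packet helpers above produce, assuming a hypothetical 3-byte vendor DCS command sent on virtual channel 0:

static void example_dcs_long_write_packet(void)
{
    static const u8 seq[] = { 0xb0, 0x12, 0x34 };   /* hypothetical payload */
    struct mipi_dsi_msg msg = {
        .channel = 0,
        .type = MIPI_DSI_DCS_LONG_WRITE,
        .tx_len = sizeof(seq),
        .tx_buf = seq,
    };
    struct mipi_dsi_packet packet;

    mipi_dsi_create_packet(&packet, &msg);
    /*
     * MIPI_DSI_DCS_LONG_WRITE is a long packet, so:
     *   packet.header[0] == 0x39 (channel 0 in bits 7:6, data type in bits 5:0)
     *   packet.header[1] == 0x03 (word count, low byte)
     *   packet.header[2] == 0x00 (word count, high byte)
     *   packet.payload == seq, packet.payload_length == 3
     *   packet.size == sizeof(packet.header) + 3
     */
}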
/**
* mipi_dsi_set_maximum_return_packet_size() - specify the maximum size of the
* payload in a long packet transmitted from the peripheral back to the
* host processor
* @dsi: DSI peripheral device
* @value: the maximum size of the payload
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_set_maximum_return_packet_size(struct mipi_dsi_device *dsi,
u16 value)
{
u8 tx[2] = { value & 0xff, value >> 8 };
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.type = MIPI_DSI_SET_MAXIMUM_RETURN_PACKET_SIZE,
.tx_len = sizeof(tx),
.tx_buf = tx,
};
return mipi_dsi_device_transfer(dsi, &msg);
}
EXPORT_SYMBOL(mipi_dsi_set_maximum_return_packet_size);
/**
* mipi_dsi_generic_write() - transmit data using a generic write packet
* @dsi: DSI peripheral device
* @payload: buffer containing the payload
* @size: size of payload buffer
*
* This function will automatically choose the right data type depending on
* the payload length.
*
* Return: The number of bytes transmitted on success or a negative error code
* on failure.
*/
ssize_t mipi_dsi_generic_write(struct mipi_dsi_device *dsi, const void *payload,
size_t size)
{
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.tx_buf = payload,
.tx_len = size
};
switch (size) {
case 0:
msg.type = MIPI_DSI_GENERIC_SHORT_WRITE_0_PARAM;
break;
case 1:
msg.type = MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM;
break;
case 2:
msg.type = MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM;
break;
default:
msg.type = MIPI_DSI_GENERIC_LONG_WRITE;
break;
}
return mipi_dsi_device_transfer(dsi, &msg);
}
EXPORT_SYMBOL(mipi_dsi_generic_write);
/**
* mipi_dsi_generic_read() - receive data using a generic read packet
* @dsi: DSI peripheral device
* @params: buffer containing the request parameters
* @num_params: number of request parameters
* @data: buffer in which to return the received data
* @size: size of receive buffer
*
* This function will automatically choose the right data type depending on
* the number of parameters passed in.
*
* Return: The number of bytes successfully read or a negative error code on
* failure.
*/
ssize_t mipi_dsi_generic_read(struct mipi_dsi_device *dsi, const void *params,
size_t num_params, void *data, size_t size)
{
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.tx_len = num_params,
.tx_buf = params,
.rx_len = size,
.rx_buf = data
};
switch (num_params) {
case 0:
msg.type = MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM;
break;
case 1:
msg.type = MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM;
break;
case 2:
msg.type = MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM;
break;
default:
return -EINVAL;
}
return mipi_dsi_device_transfer(dsi, &msg);
}
EXPORT_SYMBOL(mipi_dsi_generic_read);
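A hedged sketch of a peripheral driver using the generic read helper; the two request parameters and the 4-byte response are hypothetical and peripheral-specific:

static int foo_read_id(struct mipi_dsi_device *dsi)
{
    u8 params[2] = { 0x00, 0xf0 };  /* hypothetical register address */
    u8 id[4];
    ssize_t ret;

    /* bound the long packet the peripheral may send back */
    ret = mipi_dsi_set_maximum_return_packet_size(dsi, sizeof(id));
    if (ret < 0)
        return ret;

    ret = mipi_dsi_generic_read(dsi, params, sizeof(params), id, sizeof(id));
    if (ret < 0)
        return ret;

    dev_info(&dsi->dev, "peripheral id: %*ph\n", (int)ret, id);
    return 0;
}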
/**
* mipi_dsi_dcs_write_buffer() - transmit a DCS command with payload
* @dsi: DSI peripheral device
* @data: buffer containing data to be transmitted
* @len: size of transmission buffer
*
* This function will automatically choose the right data type depending on
* the command payload length.
*
* Return: The number of bytes successfully transmitted or a negative error
* code on failure.
*/
ssize_t mipi_dsi_dcs_write_buffer(struct mipi_dsi_device *dsi,
const void *data, size_t len)
{
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.tx_buf = data,
.tx_len = len
};
if (!ops || !ops->transfer)
return -ENOSYS;
switch (len) {
case 0:
return -EINVAL;
case 1:
msg.type = MIPI_DSI_DCS_SHORT_WRITE;
break;
case 2:
msg.type = MIPI_DSI_DCS_SHORT_WRITE_PARAM;
break;
default:
msg.type = MIPI_DSI_DCS_LONG_WRITE;
break;
}
if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
msg.flags = MIPI_DSI_MSG_USE_LPM;
return mipi_dsi_device_transfer(dsi, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_write_buffer);
return ops->transfer(dsi->host, &msg);
/**
* mipi_dsi_dcs_write() - send DCS write command
* @dsi: DSI peripheral device
* @cmd: DCS command
* @data: buffer containing the command payload
* @len: command payload length
*
* This function will automatically choose the right data type depending on
* the command payload length.
*
* Return: The number of bytes successfully transmitted or a negative error
* code on failure.
*/
ssize_t mipi_dsi_dcs_write(struct mipi_dsi_device *dsi, u8 cmd,
const void *data, size_t len)
{
ssize_t err;
size_t size;
u8 *tx;
if (len > 0) {
size = 1 + len;
tx = kmalloc(size, GFP_KERNEL);
if (!tx)
return -ENOMEM;
/* concatenate the DCS command byte and the payload */
tx[0] = cmd;
memcpy(&tx[1], data, len);
} else {
tx = &cmd;
size = 1;
}
err = mipi_dsi_dcs_write_buffer(dsi, tx, size);
if (len > 0)
kfree(tx);
return err;
}
EXPORT_SYMBOL(mipi_dsi_dcs_write);
/**
* mipi_dsi_dcs_read - send DCS read request command
* @dsi: DSI device
* @cmd: DCS read command
* @data: pointer to read buffer
* @len: length of @data
* mipi_dsi_dcs_read() - send DCS read request command
* @dsi: DSI peripheral device
* @cmd: DCS command
* @data: buffer in which to receive data
* @len: size of receive buffer
*
* Function returns number of read bytes or error code.
* Return: The number of bytes read or a negative error code on failure.
*/
ssize_t mipi_dsi_dcs_read(struct mipi_dsi_device *dsi, u8 cmd, void *data,
size_t len)
{
const struct mipi_dsi_host_ops *ops = dsi->host->ops;
struct mipi_dsi_msg msg = {
.channel = dsi->channel,
.type = MIPI_DSI_DCS_READ,
@ -260,16 +585,283 @@ ssize_t mipi_dsi_dcs_read(struct mipi_dsi_device *dsi, u8 cmd, void *data,
.rx_len = len
};
if (!ops || !ops->transfer)
return -ENOSYS;
if (dsi->mode_flags & MIPI_DSI_MODE_LPM)
msg.flags = MIPI_DSI_MSG_USE_LPM;
return ops->transfer(dsi->host, &msg);
return mipi_dsi_device_transfer(dsi, &msg);
}
EXPORT_SYMBOL(mipi_dsi_dcs_read);
/**
* mipi_dsi_dcs_nop() - send DCS nop packet
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_nop(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_NOP, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_nop);
/**
* mipi_dsi_dcs_soft_reset() - perform a software reset of the display module
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_soft_reset(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SOFT_RESET, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_soft_reset);
/**
* mipi_dsi_dcs_get_power_mode() - query the display module's current power
* mode
* @dsi: DSI peripheral device
* @mode: return location for the current power mode
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_get_power_mode(struct mipi_dsi_device *dsi, u8 *mode)
{
ssize_t err;
err = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_POWER_MODE, mode,
sizeof(*mode));
if (err <= 0) {
if (err == 0)
err = -ENODATA;
return err;
}
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_get_power_mode);
/**
* mipi_dsi_dcs_get_pixel_format() - gets the pixel format for the RGB image
* data used by the interface
* @dsi: DSI peripheral device
* @format: return location for the pixel format
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_get_pixel_format(struct mipi_dsi_device *dsi, u8 *format)
{
ssize_t err;
err = mipi_dsi_dcs_read(dsi, MIPI_DCS_GET_PIXEL_FORMAT, format,
sizeof(*format));
if (err <= 0) {
if (err == 0)
err = -ENODATA;
return err;
}
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_get_pixel_format);
/**
* mipi_dsi_dcs_enter_sleep_mode() - disable all unnecessary blocks inside the
* display module except interface communication
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_enter_sleep_mode(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_ENTER_SLEEP_MODE, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_enter_sleep_mode);
/**
* mipi_dsi_dcs_exit_sleep_mode() - enable all blocks inside the display
* module
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_exit_sleep_mode(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_EXIT_SLEEP_MODE, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_exit_sleep_mode);
/**
* mipi_dsi_dcs_set_display_off() - stop displaying the image data on the
* display device
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_set_display_off(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_OFF, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_display_off);
/**
* mipi_dsi_dcs_set_display_on() - start displaying the image data on the
* display device
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure
*/
int mipi_dsi_dcs_set_display_on(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_DISPLAY_ON, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_display_on);
/**
* mipi_dsi_dcs_set_column_address() - define the column extent of the frame
* memory accessed by the host processor
* @dsi: DSI peripheral device
* @start: first column of frame memory
* @end: last column of frame memory
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_set_column_address(struct mipi_dsi_device *dsi, u16 start,
u16 end)
{
u8 payload[4] = { start >> 8, start & 0xff, end >> 8, end & 0xff };
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_COLUMN_ADDRESS, payload,
sizeof(payload));
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_column_address);
/**
* mipi_dsi_dcs_set_page_address() - define the page extent of the frame
* memory accessed by the host processor
* @dsi: DSI peripheral device
* @start: first page of frame memory
* @end: last page of frame memory
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_set_page_address(struct mipi_dsi_device *dsi, u16 start,
u16 end)
{
u8 payload[4] = { start >> 8, start & 0xff, end >> 8, end & 0xff };
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_PAGE_ADDRESS, payload,
sizeof(payload));
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_page_address);
/**
* mipi_dsi_dcs_set_tear_off() - turn off the display module's Tearing Effect
* output signal on the TE signal line
* @dsi: DSI peripheral device
*
* Return: 0 on success or a negative error code on failure
*/
int mipi_dsi_dcs_set_tear_off(struct mipi_dsi_device *dsi)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_TEAR_OFF, NULL, 0);
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_tear_off);
/**
* mipi_dsi_dcs_set_tear_on() - turn on the display module's Tearing Effect
* output signal on the TE signal line.
* @dsi: DSI peripheral device
* @mode: the Tearing Effect Output Line mode
*
* Return: 0 on success or a negative error code on failure
*/
int mipi_dsi_dcs_set_tear_on(struct mipi_dsi_device *dsi,
enum mipi_dsi_dcs_tear_mode mode)
{
u8 value = mode;
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_TEAR_ON, &value,
sizeof(value));
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_tear_on);
/**
* mipi_dsi_dcs_set_pixel_format() - sets the pixel format for the RGB image
* data used by the interface
* @dsi: DSI peripheral device
* @format: pixel format
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_dcs_set_pixel_format(struct mipi_dsi_device *dsi, u8 format)
{
ssize_t err;
err = mipi_dsi_dcs_write(dsi, MIPI_DCS_SET_PIXEL_FORMAT, &format,
sizeof(format));
if (err < 0)
return err;
return 0;
}
EXPORT_SYMBOL(mipi_dsi_dcs_set_pixel_format);
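The DCS helpers above are intended to be chained by panel drivers. A hedged sketch of a hypothetical enable path; the delay and the pixel format value are panel-specific assumptions:

static int foo_panel_enable(struct mipi_dsi_device *dsi)
{
    int ret;

    ret = mipi_dsi_dcs_exit_sleep_mode(dsi);
    if (ret < 0)
        return ret;

    msleep(120);    /* hypothetical wake-up delay from the panel datasheet */

    ret = mipi_dsi_dcs_set_tear_on(dsi, MIPI_DSI_DCS_TEAR_MODE_VBLANK);
    if (ret < 0)
        return ret;

    ret = mipi_dsi_dcs_set_pixel_format(dsi, 0x77);   /* 24 bpp, hypothetical */
    if (ret < 0)
        return ret;

    return mipi_dsi_dcs_set_display_on(dsi);
}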
static int mipi_dsi_drv_probe(struct device *dev)
{
struct mipi_dsi_driver *drv = to_mipi_dsi_driver(dev->driver);
@ -295,12 +887,18 @@ static void mipi_dsi_drv_shutdown(struct device *dev)
}
/**
* mipi_dsi_driver_register - register a driver for DSI devices
* mipi_dsi_driver_register_full() - register a driver for DSI devices
* @drv: DSI driver structure
* @owner: owner module
*
* Return: 0 on success or a negative error code on failure.
*/
int mipi_dsi_driver_register(struct mipi_dsi_driver *drv)
int mipi_dsi_driver_register_full(struct mipi_dsi_driver *drv,
struct module *owner)
{
drv->driver.bus = &mipi_dsi_bus_type;
drv->driver.owner = owner;
if (drv->probe)
drv->driver.probe = mipi_dsi_drv_probe;
if (drv->remove)
@ -310,11 +908,13 @@ int mipi_dsi_driver_register(struct mipi_dsi_driver *drv)
return driver_register(&drv->driver);
}
EXPORT_SYMBOL(mipi_dsi_driver_register);
EXPORT_SYMBOL(mipi_dsi_driver_register_full);
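A hedged sketch of a DSI peripheral (panel) driver registering on the bus; the compatible string and device setup are hypothetical, and mipi_dsi_driver_register() is assumed to be the THIS_MODULE-passing wrapper around the _full() variant:

static int foo_panel_probe(struct mipi_dsi_device *dsi)
{
    dsi->lanes = 4;
    dsi->format = MIPI_DSI_FMT_RGB888;
    dsi->mode_flags = MIPI_DSI_MODE_VIDEO;

    return mipi_dsi_attach(dsi);
}

static int foo_panel_remove(struct mipi_dsi_device *dsi)
{
    return mipi_dsi_detach(dsi);
}

static const struct of_device_id foo_panel_of_match[] = {
    { .compatible = "foo,hypothetical-panel" },
    { }
};

static struct mipi_dsi_driver foo_panel_driver = {
    .probe = foo_panel_probe,
    .remove = foo_panel_remove,
    .driver = {
        .name = "panel-foo",
        .of_match_table = foo_panel_of_match,
    },
};

static int __init foo_panel_init(void)
{
    return mipi_dsi_driver_register(&foo_panel_driver);
}
module_init(foo_panel_init);

static void __exit foo_panel_exit(void)
{
    mipi_dsi_driver_unregister(&foo_panel_driver);
}
module_exit(foo_panel_exit);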
/**
* mipi_dsi_driver_unregister - unregister a driver for DSI devices
* mipi_dsi_driver_unregister() - unregister a driver for DSI devices
* @drv: DSI driver structure
*
* Return: 0 on success or a negative error code on failure.
*/
void mipi_dsi_driver_unregister(struct mipi_dsi_driver *drv)
{


@ -914,7 +914,7 @@ EXPORT_SYMBOL(drm_mode_equal_no_clocks_no_stereo);
*
* This function is a helper which can be used to validate modes against size
* limitations of the DRM device/connector. If a mode is too big its status
* memeber is updated with the appropriate validation failure code. The list
* member is updated with the appropriate validation failure code. The list
* itself is not changed.
*/
void drm_mode_validate_size(struct drm_device *dev,


@ -157,14 +157,20 @@ void drm_modeset_unlock_all(struct drm_device *dev)
EXPORT_SYMBOL(drm_modeset_unlock_all);
/**
* drm_modeset_lock_crtc - lock crtc with hidden acquire ctx
* @crtc: drm crtc
* drm_modeset_lock_crtc - lock crtc with hidden acquire ctx for a plane update
* @crtc: DRM CRTC
* @plane: DRM plane to be updated on @crtc
*
* This function locks the given crtc using a hidden acquire context. This is
* necessary so that drivers internally using the atomic interfaces can grab
* further locks with the lock acquire context.
* This function locks the given crtc and plane (which should be either the
* primary or cursor plane) using a hidden acquire context. This is necessary so
* that drivers internally using the atomic interfaces can grab further locks
* with the lock acquire context.
*
* Note that @plane can be NULL, e.g. when the cursor support hasn't yet been
* converted to universal planes.
*/
void drm_modeset_lock_crtc(struct drm_crtc *crtc)
void drm_modeset_lock_crtc(struct drm_crtc *crtc,
struct drm_plane *plane)
{
struct drm_modeset_acquire_ctx *ctx;
int ret;
@ -180,6 +186,18 @@ retry:
if (ret)
goto fail;
if (plane) {
ret = drm_modeset_lock(&plane->mutex, ctx);
if (ret)
goto fail;
if (plane->crtc) {
ret = drm_modeset_lock(&plane->crtc->mutex, ctx);
if (ret)
goto fail;
}
}
WARN_ON(crtc->acquire_ctx);
/* now we hold the locks, so now that it is safe, stash the
@ -437,15 +455,14 @@ void drm_modeset_unlock(struct drm_modeset_lock *lock)
}
EXPORT_SYMBOL(drm_modeset_unlock);
/* Temporary.. until we have sufficiently fine grained locking, there
* are a couple scenarios where it is convenient to grab all crtc locks.
* It is planned to remove this:
*/
/* In some legacy codepaths it's convenient to just grab all the crtc and plane
* related locks. */
int drm_modeset_lock_all_crtcs(struct drm_device *dev,
struct drm_modeset_acquire_ctx *ctx)
{
struct drm_mode_config *config = &dev->mode_config;
struct drm_crtc *crtc;
struct drm_plane *plane;
int ret = 0;
list_for_each_entry(crtc, &config->crtc_list, head) {
@ -454,6 +471,12 @@ int drm_modeset_lock_all_crtcs(struct drm_device *dev,
return ret;
}
list_for_each_entry(plane, &config->plane_list, head) {
ret = drm_modeset_lock(&plane->mutex, ctx);
if (ret)
return ret;
}
return 0;
}
EXPORT_SYMBOL(drm_modeset_lock_all_crtcs);
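drm_modeset_lock_all_crtcs(), like the hidden context in drm_modeset_lock_crtc(), relies on the usual w/w acquire-context retry dance. A hedged sketch of that pattern in a hypothetical driver path:

static int foo_lock_everything(struct drm_device *dev)
{
    struct drm_modeset_acquire_ctx ctx;
    int ret;

    drm_modeset_acquire_init(&ctx, 0);
retry:
    ret = drm_modeset_lock_all_crtcs(dev, &ctx);
    if (ret == -EDEADLK) {
        /* contention on one of the locks: drop everything and retry */
        drm_modeset_backoff(&ctx);
        goto retry;
    }

    if (!ret) {
        /* ... CRTC and plane state may safely be touched here ... */
    }

    drm_modeset_drop_locks(&ctx);
    drm_modeset_acquire_fini(&ctx);

    return ret;
}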


@ -27,10 +27,38 @@
#include <drm/drmP.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_rect.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_atomic.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_atomic_helper.h>
#define SUBPIXEL_MASK 0xffff
/**
* DOC: overview
*
* This helper library has two parts. The first part has support to implement
* primary plane support on top of the normal CRTC configuration interface.
* Since the legacy ->set_config interface ties the primary plane together with
* the CRTC state this does not allow userspace to disable the primary plane
* itself. To avoid too much duplicated code use
* drm_plane_helper_check_update(), which enforces the same restrictions that
* primary planes historically had. The default primary plane only exposes
* XRGB8888 and ARGB8888 as valid pixel formats for the attached
* framebuffer.
*
* Drivers are highly recommended to implement proper support for primary
* planes, and newly merged drivers must not rely upon these transitional
* helpers.
*
* The second part also implements transitional helpers which allow drivers to
* gradually switch to the atomic helper infrastructure for plane updates. Once
* that switch is complete drivers shouldn't use these any longer, instead using
* the proper legacy implementations for update and disable plane hooks provided
* by the atomic helpers.
*
* Again drivers are strongly urged to switch to the new interfaces.
*/
/*
* This is the minimal list of formats that seem to be safe for modeset use
* with all current DRM drivers. Most hardware can actually support more
@ -127,6 +155,11 @@ int drm_plane_helper_check_update(struct drm_plane *plane,
return -ERANGE;
}
if (!fb) {
*visible = false;
return 0;
}
*visible = drm_rect_clip_scaled(src, dest, clip, hscale, vscale);
if (!*visible)
/*
@ -369,3 +402,171 @@ int drm_crtc_init(struct drm_device *dev, struct drm_crtc *crtc,
return drm_crtc_init_with_planes(dev, crtc, primary, NULL, funcs);
}
EXPORT_SYMBOL(drm_crtc_init);
int drm_plane_helper_commit(struct drm_plane *plane,
struct drm_plane_state *plane_state,
struct drm_framebuffer *old_fb)
{
struct drm_plane_helper_funcs *plane_funcs;
struct drm_crtc *crtc[2];
struct drm_crtc_helper_funcs *crtc_funcs[2];
int i, ret = 0;
plane_funcs = plane->helper_private;
/* Since this is a transitional helper we can't assume that plane->state
* is always valid. Hence we need to use plane->crtc instead of
* plane->state->crtc as the old crtc. */
crtc[0] = plane->crtc;
crtc[1] = crtc[0] != plane_state->crtc ? plane_state->crtc : NULL;
for (i = 0; i < 2; i++)
crtc_funcs[i] = crtc[i] ? crtc[i]->helper_private : NULL;
if (plane_funcs->atomic_check) {
ret = plane_funcs->atomic_check(plane, plane_state);
if (ret)
goto out;
}
if (plane_funcs->prepare_fb && plane_state->fb) {
ret = plane_funcs->prepare_fb(plane, plane_state->fb);
if (ret)
goto out;
}
/* Point of no return, commit sw state. */
swap(plane->state, plane_state);
for (i = 0; i < 2; i++) {
if (crtc_funcs[i] && crtc_funcs[i]->atomic_begin)
crtc_funcs[i]->atomic_begin(crtc[i]);
}
plane_funcs->atomic_update(plane, plane_state);
for (i = 0; i < 2; i++) {
if (crtc_funcs[i] && crtc_funcs[i]->atomic_flush)
crtc_funcs[i]->atomic_flush(crtc[i]);
}
for (i = 0; i < 2; i++) {
if (!crtc[i])
continue;
/* There's no other way to figure out whether the crtc is running. */
ret = drm_crtc_vblank_get(crtc[i]);
if (ret == 0) {
drm_crtc_wait_one_vblank(crtc[i]);
drm_crtc_vblank_put(crtc[i]);
}
ret = 0;
}
if (plane_funcs->cleanup_fb && old_fb)
plane_funcs->cleanup_fb(plane, old_fb);
out:
if (plane_state) {
if (plane->funcs->atomic_destroy_state)
plane->funcs->atomic_destroy_state(plane, plane_state);
else
drm_atomic_helper_plane_destroy_state(plane, plane_state);
}
return ret;
}
/**
* drm_plane_helper_update() - Helper for primary plane update
* @plane: plane object to update
* @crtc: owning CRTC of owning plane
* @fb: framebuffer to flip onto plane
* @crtc_x: x offset of primary plane on crtc
* @crtc_y: y offset of primary plane on crtc
* @crtc_w: width of primary plane rectangle on crtc
* @crtc_h: height of primary plane rectangle on crtc
* @src_x: x offset of @fb for panning
* @src_y: y offset of @fb for panning
* @src_w: width of source rectangle in @fb
* @src_h: height of source rectangle in @fb
*
* Provides a default plane update handler using the atomic plane update
* functions. It is fully left to the driver to check plane constraints and
* handle corner-cases like a fully occluded or otherwise invisible plane.
*
* This is useful for piecewise transitioning of a driver to the atomic helpers.
*
* RETURNS:
* Zero on success, error code on failure
*/
int drm_plane_helper_update(struct drm_plane *plane, struct drm_crtc *crtc,
struct drm_framebuffer *fb,
int crtc_x, int crtc_y,
unsigned int crtc_w, unsigned int crtc_h,
uint32_t src_x, uint32_t src_y,
uint32_t src_w, uint32_t src_h)
{
struct drm_plane_state *plane_state;
if (plane->funcs->atomic_duplicate_state)
plane_state = plane->funcs->atomic_duplicate_state(plane);
else if (plane->state)
plane_state = drm_atomic_helper_plane_duplicate_state(plane);
else
plane_state = kzalloc(sizeof(*plane_state), GFP_KERNEL);
if (!plane_state)
return -ENOMEM;
plane_state->crtc = crtc;
drm_atomic_set_fb_for_plane(plane_state, fb);
plane_state->crtc_x = crtc_x;
plane_state->crtc_y = crtc_y;
plane_state->crtc_h = crtc_h;
plane_state->crtc_w = crtc_w;
plane_state->src_x = src_x;
plane_state->src_y = src_y;
plane_state->src_h = src_h;
plane_state->src_w = src_w;
return drm_plane_helper_commit(plane, plane_state, plane->fb);
}
EXPORT_SYMBOL(drm_plane_helper_update);
/**
* drm_plane_helper_disable() - Helper for primary plane disable
* @plane: plane to disable
*
* Provides a default plane disable handler using the atomic plane update
* functions. It is fully left to the driver to check plane constraints and
* handle corner-cases like a fully occluded or otherwise invisible plane.
*
* This is useful for piecewise transitioning of a driver to the atomic helpers.
*
* RETURNS:
* Zero on success, error code on failure
*/
int drm_plane_helper_disable(struct drm_plane *plane)
{
struct drm_plane_state *plane_state;
/* crtc helpers love to call disable functions for already disabled hw
* functions. So cope with that. */
if (!plane->crtc)
return 0;
if (plane->funcs->atomic_duplicate_state)
plane_state = plane->funcs->atomic_duplicate_state(plane);
else if (plane->state)
plane_state = drm_atomic_helper_plane_duplicate_state(plane);
else
plane_state = kzalloc(sizeof(*plane_state), GFP_KERNEL);
if (!plane_state)
return -ENOMEM;
plane_state->crtc = NULL;
drm_atomic_set_fb_for_plane(plane_state, NULL);
return drm_plane_helper_commit(plane, plane_state, plane->fb);
}
EXPORT_SYMBOL(drm_plane_helper_disable);
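A hedged sketch of how a driver part-way through its atomic conversion would route the legacy entry points through these transitional helpers; the struct name is hypothetical, and the driver must also register matching drm_plane_helper_funcs (atomic_check, atomic_update, prepare_fb, ...) via drm_plane_helper_add() for its planes:

static const struct drm_plane_funcs foo_primary_plane_funcs = {
    .update_plane  = drm_plane_helper_update,
    .disable_plane = drm_plane_helper_disable,
    .destroy       = drm_plane_cleanup,
};

Once the conversion is complete, these hooks should point at the legacy implementations provided by the atomic helpers instead, as the DOC comment above notes.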


@ -328,7 +328,7 @@ static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
*/
/**
* drm_gem_prime_export - helper library implemention of the export callback
* drm_gem_prime_export - helper library implementation of the export callback
* @dev: drm_device to export from
* @obj: GEM object to export
* @flags: flags like DRM_CLOEXEC
@ -483,7 +483,7 @@ out_unlock:
EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
/**
* drm_gem_prime_import - helper library implemention of the import callback
* drm_gem_prime_import - helper library implementation of the import callback
* @dev: drm_device to import into
* @dma_buf: dma-buf object to import
*
@ -669,7 +669,7 @@ int drm_prime_fd_to_handle_ioctl(struct drm_device *dev, void *data,
* the driver is responsible for mapping the pages into the
* importers address space for use with dma_buf itself.
*/
struct sg_table *drm_prime_pages_to_sg(struct page **pages, int nr_pages)
struct sg_table *drm_prime_pages_to_sg(struct page **pages, unsigned int nr_pages)
{
struct sg_table *sg = NULL;
int ret;


@ -118,7 +118,8 @@ static int drm_helper_probe_single_connector_modes_merge_bits(struct drm_connect
mode->status = MODE_UNVERIFIED;
if (connector->force) {
if (connector->force == DRM_FORCE_ON)
if (connector->force == DRM_FORCE_ON ||
connector->force == DRM_FORCE_ON_DIGITAL)
connector->status = connector_status_connected;
else
connector->status = connector_status_disconnected;


@ -30,12 +30,17 @@
#include <drm/drm_panel.h>
#include <drm/bridge/ptn3460.h>
#include "exynos_drm_drv.h"
#include "exynos_dp_core.h"
#define ctx_from_connector(c) container_of(c, struct exynos_dp_device, \
connector)
static inline struct exynos_dp_device *
display_to_dp(struct exynos_drm_display *d)
{
return container_of(d, struct exynos_dp_device, display);
}
struct bridge_init {
struct i2c_client *client;
struct device_node *node;
@ -882,7 +887,7 @@ static void exynos_dp_hotplug(struct work_struct *work)
static void exynos_dp_commit(struct exynos_drm_display *display)
{
struct exynos_dp_device *dp = display->ctx;
struct exynos_dp_device *dp = display_to_dp(display);
int ret;
/* Keep the panel disabled while we configure video */
@ -1020,7 +1025,7 @@ static int exynos_drm_attach_lcd_bridge(struct drm_device *dev,
static int exynos_dp_create_connector(struct exynos_drm_display *display,
struct drm_encoder *encoder)
{
struct exynos_dp_device *dp = display->ctx;
struct exynos_dp_device *dp = display_to_dp(display);
struct drm_connector *connector = &dp->connector;
int ret;
@ -1052,33 +1057,19 @@ static int exynos_dp_create_connector(struct exynos_drm_display *display,
static void exynos_dp_phy_init(struct exynos_dp_device *dp)
{
if (dp->phy) {
if (dp->phy)
phy_power_on(dp->phy);
} else if (dp->phy_addr) {
u32 reg;
reg = __raw_readl(dp->phy_addr);
reg |= dp->enable_mask;
__raw_writel(reg, dp->phy_addr);
}
}
static void exynos_dp_phy_exit(struct exynos_dp_device *dp)
{
if (dp->phy) {
if (dp->phy)
phy_power_off(dp->phy);
} else if (dp->phy_addr) {
u32 reg;
reg = __raw_readl(dp->phy_addr);
reg &= ~(dp->enable_mask);
__raw_writel(reg, dp->phy_addr);
}
}
static void exynos_dp_poweron(struct exynos_drm_display *display)
{
struct exynos_dp_device *dp = display->ctx;
struct exynos_dp_device *dp = display_to_dp(display);
if (dp->dpms_mode == DRM_MODE_DPMS_ON)
return;
@ -1099,7 +1090,7 @@ static void exynos_dp_poweron(struct exynos_drm_display *display)
static void exynos_dp_poweroff(struct exynos_drm_display *display)
{
struct exynos_dp_device *dp = display->ctx;
struct exynos_dp_device *dp = display_to_dp(display);
if (dp->dpms_mode != DRM_MODE_DPMS_ON)
return;
@ -1124,7 +1115,7 @@ static void exynos_dp_poweroff(struct exynos_drm_display *display)
static void exynos_dp_dpms(struct exynos_drm_display *display, int mode)
{
struct exynos_dp_device *dp = display->ctx;
struct exynos_dp_device *dp = display_to_dp(display);
switch (mode) {
case DRM_MODE_DPMS_ON:
@ -1147,11 +1138,6 @@ static struct exynos_drm_display_ops exynos_dp_display_ops = {
.commit = exynos_dp_commit,
};
static struct exynos_drm_display exynos_dp_display = {
.type = EXYNOS_DISPLAY_TYPE_LCD,
.ops = &exynos_dp_display_ops,
};
static struct video_info *exynos_dp_dt_parse_pdata(struct device *dev)
{
struct device_node *dp_node = dev->of_node;
@ -1210,44 +1196,6 @@ static struct video_info *exynos_dp_dt_parse_pdata(struct device *dev)
return dp_video_config;
}
static int exynos_dp_dt_parse_phydata(struct exynos_dp_device *dp)
{
struct device_node *dp_phy_node = of_node_get(dp->dev->of_node);
u32 phy_base;
int ret = 0;
dp_phy_node = of_find_node_by_name(dp_phy_node, "dptx-phy");
if (!dp_phy_node) {
dp->phy = devm_phy_get(dp->dev, "dp");
return PTR_ERR_OR_ZERO(dp->phy);
}
if (of_property_read_u32(dp_phy_node, "reg", &phy_base)) {
dev_err(dp->dev, "failed to get reg for dptx-phy\n");
ret = -EINVAL;
goto err;
}
if (of_property_read_u32(dp_phy_node, "samsung,enable-mask",
&dp->enable_mask)) {
dev_err(dp->dev, "failed to get enable-mask for dptx-phy\n");
ret = -EINVAL;
goto err;
}
dp->phy_addr = ioremap(phy_base, SZ_4);
if (!dp->phy_addr) {
dev_err(dp->dev, "failed to ioremap dp-phy\n");
ret = -ENOMEM;
goto err;
}
err:
of_node_put(dp_phy_node);
return ret;
}
static int exynos_dp_dt_parse_panel(struct exynos_dp_device *dp)
{
int ret;
@ -1263,10 +1211,10 @@ static int exynos_dp_dt_parse_panel(struct exynos_dp_device *dp)
static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
{
struct exynos_dp_device *dp = dev_get_drvdata(dev);
struct platform_device *pdev = to_platform_device(dev);
struct drm_device *drm_dev = data;
struct resource *res;
struct exynos_dp_device *dp = exynos_dp_display.ctx;
unsigned int irq_flags;
int ret = 0;
@ -1277,9 +1225,21 @@ static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
if (IS_ERR(dp->video_info))
return PTR_ERR(dp->video_info);
ret = exynos_dp_dt_parse_phydata(dp);
if (ret)
return ret;
dp->phy = devm_phy_get(dp->dev, "dp");
if (IS_ERR(dp->phy)) {
dev_err(dp->dev, "no DP phy configured\n");
ret = PTR_ERR(dp->phy);
if (ret) {
/*
* phy itself is not enabled, so we can move forward
* assigning NULL to phy pointer.
*/
if (ret == -ENOSYS || ret == -ENODEV)
dp->phy = NULL;
else
return ret;
}
}
if (!dp->panel) {
ret = exynos_dp_dt_parse_panel(dp);
@ -1346,17 +1306,15 @@ static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
dp->drm_dev = drm_dev;
platform_set_drvdata(pdev, &exynos_dp_display);
return exynos_drm_create_enc_conn(drm_dev, &exynos_dp_display);
return exynos_drm_create_enc_conn(drm_dev, &dp->display);
}
static void exynos_dp_unbind(struct device *dev, struct device *master,
void *data)
{
struct exynos_drm_display *display = dev_get_drvdata(dev);
struct exynos_dp_device *dp = dev_get_drvdata(dev);
exynos_dp_dpms(display, DRM_MODE_DPMS_OFF);
exynos_dp_dpms(&dp->display, DRM_MODE_DPMS_OFF);
}
static const struct component_ops exynos_dp_ops = {
@ -1371,16 +1329,20 @@ static int exynos_dp_probe(struct platform_device *pdev)
struct exynos_dp_device *dp;
int ret;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
exynos_dp_display.type);
if (ret)
return ret;
dp = devm_kzalloc(&pdev->dev, sizeof(struct exynos_dp_device),
GFP_KERNEL);
if (!dp)
return -ENOMEM;
dp->display.type = EXYNOS_DISPLAY_TYPE_LCD;
dp->display.ops = &exynos_dp_display_ops;
platform_set_drvdata(pdev, dp);
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
dp->display.type);
if (ret)
return ret;
panel_node = of_parse_phandle(dev->of_node, "panel", 0);
if (panel_node) {
dp->panel = of_drm_find_panel(panel_node);
@ -1389,8 +1351,6 @@ static int exynos_dp_probe(struct platform_device *pdev)
return -EPROBE_DEFER;
}
exynos_dp_display.ctx = dp;
ret = component_add(&pdev->dev, &exynos_dp_ops);
if (ret)
exynos_drm_component_del(&pdev->dev,
@ -1410,19 +1370,17 @@ static int exynos_dp_remove(struct platform_device *pdev)
#ifdef CONFIG_PM_SLEEP
static int exynos_dp_suspend(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_drm_display *display = platform_get_drvdata(pdev);
struct exynos_dp_device *dp = dev_get_drvdata(dev);
exynos_dp_dpms(display, DRM_MODE_DPMS_OFF);
exynos_dp_dpms(&dp->display, DRM_MODE_DPMS_OFF);
return 0;
}
static int exynos_dp_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_drm_display *display = platform_get_drvdata(pdev);
struct exynos_dp_device *dp = dev_get_drvdata(dev);
exynos_dp_dpms(display, DRM_MODE_DPMS_ON);
exynos_dp_dpms(&dp->display, DRM_MODE_DPMS_ON);
return 0;
}
#endif
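
As an aside on the exynos_dp_bind() hunk above: treating -ENODEV/-ENOSYS from devm_phy_get() as "no PHY wired up" and carrying on with a NULL pointer is the usual optional-resource idiom. Below is a minimal, userspace-runnable sketch of just that error mapping; fake_phy_get() and the simplified ERR_PTR()/IS_ERR() macros are illustrative stand-ins, not the real kernel API.

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(intptr_t)(err))
#define PTR_ERR(ptr)	((long)(intptr_t)(ptr))
#define IS_ERR(ptr)	((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)

static int phy_present;		/* flip to 1 to simulate a board with a PHY */
static int dummy_phy;		/* stands in for a real struct phy */

/* Stub for devm_phy_get(): returns the PHY or an ERR_PTR-encoded errno. */
static void *fake_phy_get(void)
{
	return phy_present ? (void *)&dummy_phy : ERR_PTR(-ENODEV);
}

/*
 * Mirror of the bind hunk: -ENODEV/-ENOSYS mean "no PHY wired up", so keep
 * going with phy == NULL; any other error is propagated to the caller.
 */
static int get_optional_phy(void **phy)
{
	*phy = fake_phy_get();
	if (IS_ERR(*phy)) {
		long err = PTR_ERR(*phy);

		if (err == -ENODEV || err == -ENOSYS) {
			*phy = NULL;
			return 0;
		}
		return (int)err;
	}
	return 0;
}

int main(void)
{
	void *phy;
	int ret = get_optional_phy(&phy);

	printf("ret=%d, phy=%p\n", ret, phy);
	return 0;
}
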


@ -17,6 +17,8 @@
#include <drm/drm_dp_helper.h>
#include <drm/exynos_drm.h>
#include "exynos_drm_drv.h"
#define DP_TIMEOUT_LOOP_COUNT 100
#define MAX_CR_LOOP 5
#define MAX_EQ_LOOP 5
@ -145,6 +147,7 @@ struct link_train {
};
struct exynos_dp_device {
struct exynos_drm_display display;
struct device *dev;
struct drm_device *drm_dev;
struct drm_connector connector;
@ -153,8 +156,6 @@ struct exynos_dp_device {
struct clk *clock;
unsigned int irq;
void __iomem *reg_base;
void __iomem *phy_addr;
unsigned int enable_mask;
struct video_info *video_info;
struct link_train link_train;


@ -15,10 +15,7 @@
#ifndef _EXYNOS_DRM_CRTC_H_
#define _EXYNOS_DRM_CRTC_H_
struct drm_device;
struct drm_crtc;
struct exynos_drm_manager;
struct exynos_drm_overlay;
#include "exynos_drm_drv.h"
int exynos_drm_crtc_create(struct exynos_drm_manager *manager);
int exynos_drm_crtc_enable_vblank(struct drm_device *dev, int pipe);


@ -22,6 +22,7 @@
#include "exynos_drm_drv.h"
struct exynos_dpi {
struct exynos_drm_display display;
struct device *dev;
struct device_node *panel_node;
@ -35,6 +36,11 @@ struct exynos_dpi {
#define connector_to_dpi(c) container_of(c, struct exynos_dpi, connector)
static inline struct exynos_dpi *display_to_dpi(struct exynos_drm_display *d)
{
return container_of(d, struct exynos_dpi, display);
}
static enum drm_connector_status
exynos_dpi_detect(struct drm_connector *connector, bool force)
{
@ -100,7 +106,7 @@ static struct drm_connector_helper_funcs exynos_dpi_connector_helper_funcs = {
static int exynos_dpi_create_connector(struct exynos_drm_display *display,
struct drm_encoder *encoder)
{
struct exynos_dpi *ctx = display->ctx;
struct exynos_dpi *ctx = display_to_dpi(display);
struct drm_connector *connector = &ctx->connector;
int ret;
@ -141,7 +147,7 @@ static void exynos_dpi_poweroff(struct exynos_dpi *ctx)
static void exynos_dpi_dpms(struct exynos_drm_display *display, int mode)
{
struct exynos_dpi *ctx = display->ctx;
struct exynos_dpi *ctx = display_to_dpi(display);
switch (mode) {
case DRM_MODE_DPMS_ON:
@ -165,11 +171,6 @@ static struct exynos_drm_display_ops exynos_dpi_display_ops = {
.dpms = exynos_dpi_dpms
};
static struct exynos_drm_display exynos_dpi_display = {
.type = EXYNOS_DISPLAY_TYPE_LCD,
.ops = &exynos_dpi_display_ops,
};
/* of_* functions will be removed after merge of of_graph patches */
static struct device_node *
of_get_child_by_name_reg(struct device_node *parent, const char *name, u32 reg)
@ -299,20 +300,21 @@ struct exynos_drm_display *exynos_dpi_probe(struct device *dev)
struct exynos_dpi *ctx;
int ret;
ret = exynos_drm_component_add(dev,
EXYNOS_DEVICE_TYPE_CONNECTOR,
exynos_dpi_display.type);
if (ret)
return ERR_PTR(ret);
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx)
goto err_del_component;
return ERR_PTR(-ENOMEM);
ctx->display.type = EXYNOS_DISPLAY_TYPE_LCD;
ctx->display.ops = &exynos_dpi_display_ops;
ctx->dev = dev;
exynos_dpi_display.ctx = ctx;
ctx->dpms_mode = DRM_MODE_DPMS_OFF;
ret = exynos_drm_component_add(dev,
EXYNOS_DEVICE_TYPE_CONNECTOR,
ctx->display.type);
if (ret)
return ERR_PTR(ret);
ret = exynos_dpi_parse_dt(ctx);
if (ret < 0) {
devm_kfree(dev, ctx);
@ -328,7 +330,7 @@ struct exynos_drm_display *exynos_dpi_probe(struct device *dev)
}
}
return &exynos_dpi_display;
return &ctx->display;
err_del_component:
exynos_drm_component_del(dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
@ -336,16 +338,16 @@ err_del_component:
return NULL;
}
int exynos_dpi_remove(struct device *dev)
int exynos_dpi_remove(struct exynos_drm_display *display)
{
struct exynos_dpi *ctx = exynos_dpi_display.ctx;
struct exynos_dpi *ctx = display_to_dpi(display);
exynos_dpi_dpms(&exynos_dpi_display, DRM_MODE_DPMS_OFF);
exynos_dpi_dpms(&ctx->display, DRM_MODE_DPMS_OFF);
if (ctx->panel)
drm_panel_detach(ctx->panel);
exynos_drm_component_del(dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
exynos_drm_component_del(ctx->dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
return 0;
}
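
A quick note on the pattern these exynos_drm_dpi.c hunks adopt: the driver context now embeds struct exynos_drm_display directly and recovers it with container_of() (display_to_dpi() and friends), instead of stashing a void *ctx in a file-scope static. The standalone sketch below models that with invented fake_display/fake_dpi types; it compiles and runs on its own and is not the real exynos code.

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct fake_display {			/* plays the role of exynos_drm_display */
	int type;
};

struct fake_dpi {			/* plays the role of struct exynos_dpi */
	struct fake_display display;	/* embedded instead of a void *ctx */
	int dpms_mode;
};

static struct fake_dpi *display_to_dpi(struct fake_display *d)
{
	return container_of(d, struct fake_dpi, display);
}

int main(void)
{
	struct fake_dpi ctx = { .display = { .type = 1 }, .dpms_mode = 3 };
	struct fake_display *d = &ctx.display;	/* what the DRM core hands back */

	/* Recover the driver context without a global or a void * cast. */
	printf("dpms_mode = %d\n", display_to_dpi(d)->dpms_mode);
	return 0;
}
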


@ -203,8 +203,6 @@ static int exynos_drm_resume(struct drm_device *dev)
}
drm_modeset_unlock_all(dev);
drm_helper_resume_force_mode(dev);
return 0;
}
@ -475,8 +473,6 @@ void exynos_drm_component_del(struct device *dev,
list_del(&cdev->list);
kfree(cdev);
}
break;
}
mutex_unlock(&drm_component_lock);
@ -556,182 +552,68 @@ static const struct component_master_ops exynos_drm_ops = {
.unbind = exynos_drm_unbind,
};
static struct platform_driver *const exynos_drm_kms_drivers[] = {
#ifdef CONFIG_DRM_EXYNOS_FIMD
&fimd_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_DP
&dp_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_DSI
&dsi_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_HDMI
&mixer_driver,
&hdmi_driver,
#endif
};
static struct platform_driver *const exynos_drm_non_kms_drivers[] = {
#ifdef CONFIG_DRM_EXYNOS_G2D
&g2d_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMC
&fimc_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_ROTATOR
&rotator_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_GSC
&gsc_driver,
#endif
#ifdef CONFIG_DRM_EXYNOS_IPP
&ipp_driver,
#endif
};
static int exynos_drm_platform_probe(struct platform_device *pdev)
{
struct component_match *match;
int ret;
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
exynos_drm_driver.num_ioctls = ARRAY_SIZE(exynos_ioctls);
#ifdef CONFIG_DRM_EXYNOS_FIMD
ret = platform_driver_register(&fimd_driver);
if (ret < 0)
return ret;
#endif
#ifdef CONFIG_DRM_EXYNOS_DP
ret = platform_driver_register(&dp_driver);
if (ret < 0)
goto err_unregister_fimd_drv;
#endif
#ifdef CONFIG_DRM_EXYNOS_DSI
ret = platform_driver_register(&dsi_driver);
if (ret < 0)
goto err_unregister_dp_drv;
#endif
#ifdef CONFIG_DRM_EXYNOS_HDMI
ret = platform_driver_register(&mixer_driver);
if (ret < 0)
goto err_unregister_dsi_drv;
ret = platform_driver_register(&hdmi_driver);
if (ret < 0)
goto err_unregister_mixer_drv;
#endif
match = exynos_drm_match_add(&pdev->dev);
if (IS_ERR(match)) {
ret = PTR_ERR(match);
goto err_unregister_hdmi_drv;
return PTR_ERR(match);
}
ret = component_master_add_with_match(&pdev->dev, &exynos_drm_ops,
match);
if (ret < 0)
goto err_unregister_hdmi_drv;
#ifdef CONFIG_DRM_EXYNOS_G2D
ret = platform_driver_register(&g2d_driver);
if (ret < 0)
goto err_del_component_master;
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMC
ret = platform_driver_register(&fimc_driver);
if (ret < 0)
goto err_unregister_g2d_drv;
#endif
#ifdef CONFIG_DRM_EXYNOS_ROTATOR
ret = platform_driver_register(&rotator_driver);
if (ret < 0)
goto err_unregister_fimc_drv;
#endif
#ifdef CONFIG_DRM_EXYNOS_GSC
ret = platform_driver_register(&gsc_driver);
if (ret < 0)
goto err_unregister_rotator_drv;
#endif
#ifdef CONFIG_DRM_EXYNOS_IPP
ret = platform_driver_register(&ipp_driver);
if (ret < 0)
goto err_unregister_gsc_drv;
ret = exynos_platform_device_ipp_register();
if (ret < 0)
goto err_unregister_ipp_drv;
#endif
return ret;
#ifdef CONFIG_DRM_EXYNOS_IPP
err_unregister_ipp_drv:
platform_driver_unregister(&ipp_driver);
err_unregister_gsc_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_GSC
platform_driver_unregister(&gsc_driver);
err_unregister_rotator_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_ROTATOR
platform_driver_unregister(&rotator_driver);
err_unregister_fimc_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMC
platform_driver_unregister(&fimc_driver);
err_unregister_g2d_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_G2D
platform_driver_unregister(&g2d_driver);
err_del_component_master:
#endif
component_master_del(&pdev->dev, &exynos_drm_ops);
err_unregister_hdmi_drv:
#ifdef CONFIG_DRM_EXYNOS_HDMI
platform_driver_unregister(&hdmi_driver);
err_unregister_mixer_drv:
platform_driver_unregister(&mixer_driver);
err_unregister_dsi_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_DSI
platform_driver_unregister(&dsi_driver);
err_unregister_dp_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_DP
platform_driver_unregister(&dp_driver);
err_unregister_fimd_drv:
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMD
platform_driver_unregister(&fimd_driver);
#endif
return ret;
return component_master_add_with_match(&pdev->dev, &exynos_drm_ops,
match);
}
static int exynos_drm_platform_remove(struct platform_device *pdev)
{
#ifdef CONFIG_DRM_EXYNOS_IPP
exynos_platform_device_ipp_unregister();
platform_driver_unregister(&ipp_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_GSC
platform_driver_unregister(&gsc_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_ROTATOR
platform_driver_unregister(&rotator_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMC
platform_driver_unregister(&fimc_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_G2D
platform_driver_unregister(&g2d_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_HDMI
platform_driver_unregister(&mixer_driver);
platform_driver_unregister(&hdmi_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_FIMD
platform_driver_unregister(&fimd_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_DSI
platform_driver_unregister(&dsi_driver);
#endif
#ifdef CONFIG_DRM_EXYNOS_DP
platform_driver_unregister(&dp_driver);
#endif
component_master_del(&pdev->dev, &exynos_drm_ops);
return 0;
}
static const char * const strings[] = {
"samsung,exynos3",
"samsung,exynos4",
"samsung,exynos5",
};
static struct platform_driver exynos_drm_platform_driver = {
.probe = exynos_drm_platform_probe,
.remove = exynos_drm_platform_remove,
@ -743,7 +625,25 @@ static struct platform_driver exynos_drm_platform_driver = {
static int exynos_drm_init(void)
{
int ret;
bool is_exynos = false;
int ret, i, j;
/*
* Register device object only in case of Exynos SoC.
*
* The code below temporarily resolves an infinite loop issue hit by
* the Exynos DRM driver on multi-platform kernels; it will be
* replaced with a more generic mechanism later.
*/
for (i = 0; i < ARRAY_SIZE(strings); i++) {
if (of_machine_is_compatible(strings[i])) {
is_exynos = true;
break;
}
}
if (!is_exynos)
return -ENODEV;
/*
* Register device object only in case of Exynos SoC.
@ -762,24 +662,50 @@ static int exynos_drm_init(void)
if (IS_ERR(exynos_drm_pdev))
return PTR_ERR(exynos_drm_pdev);
#ifdef CONFIG_DRM_EXYNOS_VIDI
ret = exynos_drm_probe_vidi();
if (ret < 0)
goto err_unregister_pd;
for (i = 0; i < ARRAY_SIZE(exynos_drm_kms_drivers); ++i) {
ret = platform_driver_register(exynos_drm_kms_drivers[i]);
if (ret < 0)
goto err_unregister_kms_drivers;
}
for (j = 0; j < ARRAY_SIZE(exynos_drm_non_kms_drivers); ++j) {
ret = platform_driver_register(exynos_drm_non_kms_drivers[j]);
if (ret < 0)
goto err_unregister_non_kms_drivers;
}
#ifdef CONFIG_DRM_EXYNOS_IPP
ret = exynos_platform_device_ipp_register();
if (ret < 0)
goto err_unregister_non_kms_drivers;
#endif
ret = platform_driver_register(&exynos_drm_platform_driver);
if (ret)
goto err_remove_vidi;
goto err_unregister_resources;
return 0;
err_remove_vidi:
#ifdef CONFIG_DRM_EXYNOS_VIDI
err_unregister_resources:
#ifdef CONFIG_DRM_EXYNOS_IPP
exynos_platform_device_ipp_unregister();
#endif
err_unregister_non_kms_drivers:
while (--j >= 0)
platform_driver_unregister(exynos_drm_non_kms_drivers[j]);
err_unregister_kms_drivers:
while (--i >= 0)
platform_driver_unregister(exynos_drm_kms_drivers[i]);
exynos_drm_remove_vidi();
err_unregister_pd:
#endif
platform_device_unregister(exynos_drm_pdev);
return ret;
@ -787,10 +713,22 @@ err_unregister_pd:
static void exynos_drm_exit(void)
{
platform_driver_unregister(&exynos_drm_platform_driver);
#ifdef CONFIG_DRM_EXYNOS_VIDI
exynos_drm_remove_vidi();
int i;
#ifdef CONFIG_DRM_EXYNOS_IPP
exynos_platform_device_ipp_unregister();
#endif
for (i = ARRAY_SIZE(exynos_drm_non_kms_drivers) - 1; i >= 0; --i)
platform_driver_unregister(exynos_drm_non_kms_drivers[i]);
for (i = ARRAY_SIZE(exynos_drm_kms_drivers) - 1; i >= 0; --i)
platform_driver_unregister(exynos_drm_kms_drivers[i]);
platform_driver_unregister(&exynos_drm_platform_driver);
exynos_drm_remove_vidi();
platform_device_unregister(exynos_drm_pdev);
}
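
The exynos_drm_drv.c rework above replaces the long #ifdef ladder with arrays of platform drivers and a reverse-order unwind on failure. The toy program below models only that register-then-roll-back loop; the driver names and the stubbed register_one()/unregister_one() functions are made up for the demo.

#include <stdio.h>

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

static const char *const drivers[] = { "fimd", "dp", "dsi", "mixer", "hdmi" };

static int register_one(int i)
{
	if (i == 3)			/* pretend the fourth driver fails */
		return -1;
	printf("registered   %s\n", drivers[i]);
	return 0;
}

static void unregister_one(int i)
{
	printf("unregistered %s\n", drivers[i]);
}

int main(void)
{
	int i, ret = 0;

	for (i = 0; i < (int)ARRAY_SIZE(drivers); i++) {
		ret = register_one(i);
		if (ret < 0)
			goto err_unwind;
	}
	return 0;

err_unwind:
	/* Undo only the entries that succeeded, in reverse order. */
	while (--i >= 0)
		unregister_one(i);
	return ret;
}
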


@ -15,6 +15,7 @@
#ifndef _EXYNOS_DRM_DRV_H_
#define _EXYNOS_DRM_DRV_H_
#include <drm/drmP.h>
#include <linux/module.h>
#define MAX_CRTC 3
@ -22,24 +23,6 @@
#define MAX_FB_BUFFER 4
#define DEFAULT_ZPOS -1
#define _wait_for(COND, MS) ({ \
unsigned long timeout__ = jiffies + msecs_to_jiffies(MS); \
int ret__ = 0; \
while (!(COND)) { \
if (time_after(jiffies, timeout__)) { \
ret__ = -ETIMEDOUT; \
break; \
} \
} \
ret__; \
})
#define wait_for(COND, MS) _wait_for(COND, MS)
struct drm_device;
struct exynos_drm_overlay;
struct drm_connector;
/* This enumerates device type. */
enum exynos_drm_device_type {
EXYNOS_DEVICE_TYPE_NONE,
@ -83,10 +66,10 @@ enum exynos_drm_output_type {
* @dma_addr: array of bus(accessed by dma) address to the memory region
* allocated for a overlay.
* @zpos: order of overlay layer(z position).
* @default_win: a window to be enabled.
* @color_key: color key on or off.
* @index_color: if using color key feature then this value would be used
* as index color.
* @default_win: a window to be enabled.
* @color_key: color key on or off.
* @local_path: in case of lcd type, local path mode on or off.
* @transparency: transparency on or off.
* @activated: activated or not.
@ -114,19 +97,20 @@ struct exynos_drm_overlay {
uint32_t pixel_format;
dma_addr_t dma_addr[MAX_FB_BUFFER];
int zpos;
bool default_win;
bool color_key;
unsigned int index_color;
bool local_path;
bool transparency;
bool activated;
bool default_win:1;
bool color_key:1;
bool local_path:1;
bool transparency:1;
bool activated:1;
};
/*
* Exynos DRM Display Structure.
* - this structure is common to analog tv, digital tv and lcd panel.
*
* @create_connector: initialize and register a new connector
* @remove: cleans up the display for removal
* @mode_fixup: fix mode data comparing to hw specific display mode.
* @mode_set: convert drm_display_mode to hw specific display mode and
@ -168,7 +152,6 @@ struct exynos_drm_display {
struct drm_encoder *encoder;
struct drm_connector *connector;
struct exynos_drm_display_ops *ops;
void *ctx;
};
/*
@ -227,7 +210,6 @@ struct exynos_drm_manager {
struct drm_crtc *crtc;
int pipe;
struct exynos_drm_manager_ops *ops;
void *ctx;
};
struct exynos_drm_g2d_private {
@ -279,8 +261,6 @@ struct exynos_drm_private {
* @dev: pointer to device object for subdrv device driver.
* @drm_dev: pointer to drm_device and this pointer would be set
* when sub driver calls exynos_drm_subdrv_register().
* @manager: subdrv has its own manager to control a hardware appropriately
* and we can access a hardware drawing on this manager.
* @probe: this callback would be called by exynos drm driver after
* subdrv is registered to it.
* @remove: this callback is used to release resources created
@ -312,45 +292,34 @@ int exynos_drm_device_subdrv_remove(struct drm_device *dev);
int exynos_drm_subdrv_open(struct drm_device *dev, struct drm_file *file);
void exynos_drm_subdrv_close(struct drm_device *dev, struct drm_file *file);
/*
* this function registers exynos drm hdmi platform device. It ensures only one
* instance of the device is created.
*/
int exynos_platform_device_hdmi_register(void);
/*
* this function unregisters exynos drm hdmi platform device if it exists.
*/
void exynos_platform_device_hdmi_unregister(void);
/*
* this function registers exynos drm ipp platform device.
*/
#ifdef CONFIG_DRM_EXYNOS_IPP
int exynos_platform_device_ipp_register(void);
/*
* this function unregisters exynos drm ipp platform device if it exists.
*/
void exynos_platform_device_ipp_unregister(void);
#else
static inline int exynos_platform_device_ipp_register(void) { return 0; }
static inline void exynos_platform_device_ipp_unregister(void) {}
#endif
#ifdef CONFIG_DRM_EXYNOS_DPI
struct exynos_drm_display * exynos_dpi_probe(struct device *dev);
int exynos_dpi_remove(struct device *dev);
int exynos_dpi_remove(struct exynos_drm_display *display);
#else
static inline struct exynos_drm_display *
exynos_dpi_probe(struct device *dev) { return NULL; }
static inline int exynos_dpi_remove(struct device *dev) { return 0; }
static inline int exynos_dpi_remove(struct exynos_drm_display *display)
{
return 0;
}
#endif
/*
* this function registers exynos drm vidi platform device/driver.
*/
#ifdef CONFIG_DRM_EXYNOS_VIDI
int exynos_drm_probe_vidi(void);
/*
* this function unregister exynos drm vidi platform device/driver.
*/
void exynos_drm_remove_vidi(void);
#else
static inline int exynos_drm_probe_vidi(void) { return 0; }
static inline void exynos_drm_remove_vidi(void) {}
#endif
/* This function creates a encoder and a connector, and initializes them. */
int exynos_drm_create_enc_conn(struct drm_device *dev,


@ -268,9 +268,9 @@ struct exynos_dsi_driver_data {
};
struct exynos_dsi {
struct exynos_drm_display display;
struct mipi_dsi_host dsi_host;
struct drm_connector connector;
struct drm_encoder *encoder;
struct device_node *panel_node;
struct drm_panel *panel;
struct device *dev;
@ -304,6 +304,11 @@ struct exynos_dsi {
#define host_to_dsi(host) container_of(host, struct exynos_dsi, dsi_host)
#define connector_to_dsi(c) container_of(c, struct exynos_dsi, connector)
static inline struct exynos_dsi *display_to_dsi(struct exynos_drm_display *d)
{
return container_of(d, struct exynos_dsi, display);
}
static struct exynos_dsi_driver_data exynos3_dsi_driver_data = {
.plltmr_reg = 0x50,
.has_freqband = 1,
@ -316,6 +321,11 @@ static struct exynos_dsi_driver_data exynos4_dsi_driver_data = {
.has_clklane_stop = 1,
};
static struct exynos_dsi_driver_data exynos4415_dsi_driver_data = {
.plltmr_reg = 0x58,
.has_clklane_stop = 1,
};
static struct exynos_dsi_driver_data exynos5_dsi_driver_data = {
.plltmr_reg = 0x58,
};
@ -325,6 +335,8 @@ static struct of_device_id exynos_dsi_of_match[] = {
.data = &exynos3_dsi_driver_data },
{ .compatible = "samsung,exynos4210-mipi-dsi",
.data = &exynos4_dsi_driver_data },
{ .compatible = "samsung,exynos4415-mipi-dsi",
.data = &exynos4415_dsi_driver_data },
{ .compatible = "samsung,exynos5410-mipi-dsi",
.data = &exynos5_dsi_driver_data },
{ }
@ -1104,7 +1116,7 @@ static irqreturn_t exynos_dsi_irq(int irq, void *dev_id)
static irqreturn_t exynos_dsi_te_irq_handler(int irq, void *dev_id)
{
struct exynos_dsi *dsi = (struct exynos_dsi *)dev_id;
struct drm_encoder *encoder = dsi->encoder;
struct drm_encoder *encoder = dsi->display.encoder;
if (dsi->state & DSIM_STATE_ENABLED)
exynos_drm_crtc_te_handler(encoder->crtc);
@ -1143,6 +1155,7 @@ static int exynos_dsi_init(struct exynos_dsi *dsi)
static int exynos_dsi_register_te_irq(struct exynos_dsi *dsi)
{
int ret;
int te_gpio_irq;
dsi->te_gpio = of_get_named_gpio(dsi->panel_node, "te-gpios", 0);
if (!gpio_is_valid(dsi->te_gpio)) {
@ -1157,14 +1170,10 @@ static int exynos_dsi_register_te_irq(struct exynos_dsi *dsi)
goto out;
}
/*
* This TE GPIO IRQ should not be set to IRQ_NOAUTOEN, because panel
* calls drm_panel_init() first then calls mipi_dsi_attach() in probe().
* It means that te_gpio is invalid when exynos_dsi_enable_irq() is
* called by drm_panel_init() before panel is attached.
*/
ret = request_threaded_irq(gpio_to_irq(dsi->te_gpio),
exynos_dsi_te_irq_handler, NULL,
te_gpio_irq = gpio_to_irq(dsi->te_gpio);
irq_set_status_flags(te_gpio_irq, IRQ_NOAUTOEN);
ret = request_threaded_irq(te_gpio_irq, exynos_dsi_te_irq_handler, NULL,
IRQF_TRIGGER_RISING, "TE", dsi);
if (ret) {
dev_err(dsi->dev, "request interrupt failed with %d\n", ret);
@ -1195,9 +1204,6 @@ static int exynos_dsi_host_attach(struct mipi_dsi_host *host,
dsi->mode_flags = device->mode_flags;
dsi->panel_node = device->dev.of_node;
if (dsi->connector.dev)
drm_helper_hpd_irq_event(dsi->connector.dev);
/*
* This is a temporary solution and should be made by more generic way.
*
@ -1211,6 +1217,9 @@ static int exynos_dsi_host_attach(struct mipi_dsi_host *host,
return ret;
}
if (dsi->connector.dev)
drm_helper_hpd_irq_event(dsi->connector.dev);
return 0;
}
@ -1236,7 +1245,7 @@ static bool exynos_dsi_is_short_dsi_type(u8 type)
}
static ssize_t exynos_dsi_host_transfer(struct mipi_dsi_host *host,
struct mipi_dsi_msg *msg)
const struct mipi_dsi_msg *msg)
{
struct exynos_dsi *dsi = host_to_dsi(host);
struct exynos_dsi_transfer xfer;
@ -1369,16 +1378,17 @@ static int exynos_dsi_enable(struct exynos_dsi *dsi)
exynos_dsi_set_display_mode(dsi);
exynos_dsi_set_display_enable(dsi, true);
dsi->state |= DSIM_STATE_ENABLED;
ret = drm_panel_enable(dsi->panel);
if (ret < 0) {
dsi->state &= ~DSIM_STATE_ENABLED;
exynos_dsi_set_display_enable(dsi, false);
drm_panel_unprepare(dsi->panel);
exynos_dsi_poweroff(dsi);
return ret;
}
dsi->state |= DSIM_STATE_ENABLED;
return 0;
}
@ -1397,7 +1407,7 @@ static void exynos_dsi_disable(struct exynos_dsi *dsi)
static void exynos_dsi_dpms(struct exynos_drm_display *display, int mode)
{
struct exynos_dsi *dsi = display->ctx;
struct exynos_dsi *dsi = display_to_dsi(display);
if (dsi->panel) {
switch (mode) {
@ -1474,7 +1484,7 @@ exynos_dsi_best_encoder(struct drm_connector *connector)
{
struct exynos_dsi *dsi = connector_to_dsi(connector);
return dsi->encoder;
return dsi->display.encoder;
}
static struct drm_connector_helper_funcs exynos_dsi_connector_helper_funcs = {
@ -1486,12 +1496,10 @@ static struct drm_connector_helper_funcs exynos_dsi_connector_helper_funcs = {
static int exynos_dsi_create_connector(struct exynos_drm_display *display,
struct drm_encoder *encoder)
{
struct exynos_dsi *dsi = display->ctx;
struct exynos_dsi *dsi = display_to_dsi(display);
struct drm_connector *connector = &dsi->connector;
int ret;
dsi->encoder = encoder;
connector->polled = DRM_CONNECTOR_POLL_HPD;
ret = drm_connector_init(encoder->dev, connector,
@ -1512,7 +1520,7 @@ static int exynos_dsi_create_connector(struct exynos_drm_display *display,
static void exynos_dsi_mode_set(struct exynos_drm_display *display,
struct drm_display_mode *mode)
{
struct exynos_dsi *dsi = display->ctx;
struct exynos_dsi *dsi = display_to_dsi(display);
struct videomode *vm = &dsi->vm;
vm->hactive = mode->hdisplay;
@ -1531,10 +1539,6 @@ static struct exynos_drm_display_ops exynos_dsi_display_ops = {
.dpms = exynos_dsi_dpms
};
static struct exynos_drm_display exynos_dsi_display = {
.type = EXYNOS_DISPLAY_TYPE_LCD,
.ops = &exynos_dsi_display_ops,
};
MODULE_DEVICE_TABLE(of, exynos_dsi_of_match);
/* of_* functions will be removed after merge of of_graph patches */
@ -1640,28 +1644,28 @@ end:
static int exynos_dsi_bind(struct device *dev, struct device *master,
void *data)
{
struct exynos_drm_display *display = dev_get_drvdata(dev);
struct exynos_dsi *dsi = display_to_dsi(display);
struct drm_device *drm_dev = data;
struct exynos_dsi *dsi;
int ret;
ret = exynos_drm_create_enc_conn(drm_dev, &exynos_dsi_display);
ret = exynos_drm_create_enc_conn(drm_dev, display);
if (ret) {
DRM_ERROR("Encoder create [%d] failed with %d\n",
exynos_dsi_display.type, ret);
display->type, ret);
return ret;
}
dsi = exynos_dsi_display.ctx;
return mipi_dsi_host_register(&dsi->dsi_host);
}
static void exynos_dsi_unbind(struct device *dev, struct device *master,
void *data)
{
struct exynos_dsi *dsi = exynos_dsi_display.ctx;
struct exynos_drm_display *display = dev_get_drvdata(dev);
struct exynos_dsi *dsi = display_to_dsi(display);
exynos_dsi_dpms(&exynos_dsi_display, DRM_MODE_DPMS_OFF);
exynos_dsi_dpms(display, DRM_MODE_DPMS_OFF);
mipi_dsi_host_unregister(&dsi->dsi_host);
}
@ -1673,22 +1677,23 @@ static const struct component_ops exynos_dsi_component_ops = {
static int exynos_dsi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct resource *res;
struct exynos_dsi *dsi;
int ret;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
exynos_dsi_display.type);
dsi = devm_kzalloc(dev, sizeof(*dsi), GFP_KERNEL);
if (!dsi)
return -ENOMEM;
dsi->display.type = EXYNOS_DISPLAY_TYPE_LCD;
dsi->display.ops = &exynos_dsi_display_ops;
ret = exynos_drm_component_add(dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
dsi->display.type);
if (ret)
return ret;
dsi = devm_kzalloc(&pdev->dev, sizeof(*dsi), GFP_KERNEL);
if (!dsi) {
dev_err(&pdev->dev, "failed to allocate dsi object.\n");
ret = -ENOMEM;
goto err_del_component;
}
/* Initialize to an invalid value; the real GPIO is parsed from the panel node later */
dsi->te_gpio = -ENOENT;
@ -1697,9 +1702,9 @@ static int exynos_dsi_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&dsi->transfer_list);
dsi->dsi_host.ops = &exynos_dsi_ops;
dsi->dsi_host.dev = &pdev->dev;
dsi->dsi_host.dev = dev;
dsi->dev = &pdev->dev;
dsi->dev = dev;
dsi->driver_data = exynos_dsi_get_driver_data(pdev);
ret = exynos_dsi_parse_dt(dsi);
@ -1708,70 +1713,68 @@ static int exynos_dsi_probe(struct platform_device *pdev)
dsi->supplies[0].supply = "vddcore";
dsi->supplies[1].supply = "vddio";
ret = devm_regulator_bulk_get(&pdev->dev, ARRAY_SIZE(dsi->supplies),
ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(dsi->supplies),
dsi->supplies);
if (ret) {
dev_info(&pdev->dev, "failed to get regulators: %d\n", ret);
dev_info(dev, "failed to get regulators: %d\n", ret);
return -EPROBE_DEFER;
}
dsi->pll_clk = devm_clk_get(&pdev->dev, "pll_clk");
dsi->pll_clk = devm_clk_get(dev, "pll_clk");
if (IS_ERR(dsi->pll_clk)) {
dev_info(&pdev->dev, "failed to get dsi pll input clock\n");
dev_info(dev, "failed to get dsi pll input clock\n");
ret = PTR_ERR(dsi->pll_clk);
goto err_del_component;
}
dsi->bus_clk = devm_clk_get(&pdev->dev, "bus_clk");
dsi->bus_clk = devm_clk_get(dev, "bus_clk");
if (IS_ERR(dsi->bus_clk)) {
dev_info(&pdev->dev, "failed to get dsi bus clock\n");
dev_info(dev, "failed to get dsi bus clock\n");
ret = PTR_ERR(dsi->bus_clk);
goto err_del_component;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
dsi->reg_base = devm_ioremap_resource(&pdev->dev, res);
dsi->reg_base = devm_ioremap_resource(dev, res);
if (IS_ERR(dsi->reg_base)) {
dev_err(&pdev->dev, "failed to remap io region\n");
dev_err(dev, "failed to remap io region\n");
ret = PTR_ERR(dsi->reg_base);
goto err_del_component;
}
dsi->phy = devm_phy_get(&pdev->dev, "dsim");
dsi->phy = devm_phy_get(dev, "dsim");
if (IS_ERR(dsi->phy)) {
dev_info(&pdev->dev, "failed to get dsim phy\n");
dev_info(dev, "failed to get dsim phy\n");
ret = PTR_ERR(dsi->phy);
goto err_del_component;
}
dsi->irq = platform_get_irq(pdev, 0);
if (dsi->irq < 0) {
dev_err(&pdev->dev, "failed to request dsi irq resource\n");
dev_err(dev, "failed to request dsi irq resource\n");
ret = dsi->irq;
goto err_del_component;
}
irq_set_status_flags(dsi->irq, IRQ_NOAUTOEN);
ret = devm_request_threaded_irq(&pdev->dev, dsi->irq, NULL,
ret = devm_request_threaded_irq(dev, dsi->irq, NULL,
exynos_dsi_irq, IRQF_ONESHOT,
dev_name(&pdev->dev), dsi);
dev_name(dev), dsi);
if (ret) {
dev_err(&pdev->dev, "failed to request dsi irq\n");
dev_err(dev, "failed to request dsi irq\n");
goto err_del_component;
}
exynos_dsi_display.ctx = dsi;
platform_set_drvdata(pdev, &dsi->display);
platform_set_drvdata(pdev, &exynos_dsi_display);
ret = component_add(&pdev->dev, &exynos_dsi_component_ops);
ret = component_add(dev, &exynos_dsi_component_ops);
if (ret)
goto err_del_component;
return ret;
err_del_component:
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
exynos_drm_component_del(dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
return ret;
}


@ -14,8 +14,6 @@
#ifndef _EXYNOS_DRM_ENCODER_H_
#define _EXYNOS_DRM_ENCODER_H_
struct exynos_drm_manager;
void exynos_drm_encoder_setup(struct drm_device *dev);
struct drm_encoder *exynos_drm_encoder_create(struct drm_device *dev,
struct exynos_drm_display *mgr,


@ -84,8 +84,6 @@
/* FIMD has totally five hardware windows. */
#define WINDOWS_NR 5
#define get_fimd_manager(mgr) platform_get_drvdata(to_platform_device(dev))
struct fimd_driver_data {
unsigned int timing_base;
unsigned int lcdblk_offset;
@ -96,6 +94,7 @@ struct fimd_driver_data {
unsigned int has_clksel:1;
unsigned int has_limited_fmt:1;
unsigned int has_vidoutcon:1;
unsigned int has_vtsel:1;
};
static struct fimd_driver_data s3c64xx_fimd_driver_data = {
@ -118,6 +117,17 @@ static struct fimd_driver_data exynos4_fimd_driver_data = {
.lcdblk_vt_shift = 10,
.lcdblk_bypass_shift = 1,
.has_shadowcon = 1,
.has_vtsel = 1,
};
static struct fimd_driver_data exynos4415_fimd_driver_data = {
.timing_base = 0x20000,
.lcdblk_offset = 0x210,
.lcdblk_vt_shift = 10,
.lcdblk_bypass_shift = 1,
.has_shadowcon = 1,
.has_vidoutcon = 1,
.has_vtsel = 1,
};
static struct fimd_driver_data exynos5_fimd_driver_data = {
@ -127,6 +137,7 @@ static struct fimd_driver_data exynos5_fimd_driver_data = {
.lcdblk_bypass_shift = 15,
.has_shadowcon = 1,
.has_vidoutcon = 1,
.has_vtsel = 1,
};
struct fimd_win_data {
@ -146,6 +157,7 @@ struct fimd_win_data {
};
struct fimd_context {
struct exynos_drm_manager manager;
struct device *dev;
struct drm_device *drm_dev;
struct clk *bus_clk;
@ -173,6 +185,11 @@ struct fimd_context {
struct exynos_drm_display *display;
};
static inline struct fimd_context *mgr_to_fimd(struct exynos_drm_manager *mgr)
{
return container_of(mgr, struct fimd_context, manager);
}
static const struct of_device_id fimd_driver_dt_match[] = {
{ .compatible = "samsung,s3c6400-fimd",
.data = &s3c64xx_fimd_driver_data },
@ -180,6 +197,8 @@ static const struct of_device_id fimd_driver_dt_match[] = {
.data = &exynos3_fimd_driver_data },
{ .compatible = "samsung,exynos4210-fimd",
.data = &exynos4_fimd_driver_data },
{ .compatible = "samsung,exynos4415-fimd",
.data = &exynos4415_fimd_driver_data },
{ .compatible = "samsung,exynos5250-fimd",
.data = &exynos5_fimd_driver_data },
{},
@ -197,7 +216,7 @@ static inline struct fimd_driver_data *drm_fimd_get_driver_data(
static void fimd_wait_for_vblank(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
if (ctx->suspended)
return;
@ -214,9 +233,35 @@ static void fimd_wait_for_vblank(struct exynos_drm_manager *mgr)
DRM_DEBUG_KMS("vblank wait timed out.\n");
}
static void fimd_enable_video_output(struct fimd_context *ctx, int win,
bool enable)
{
u32 val = readl(ctx->regs + WINCON(win));
if (enable)
val |= WINCONx_ENWIN;
else
val &= ~WINCONx_ENWIN;
writel(val, ctx->regs + WINCON(win));
}
static void fimd_enable_shadow_channel_path(struct fimd_context *ctx, int win,
bool enable)
{
u32 val = readl(ctx->regs + SHADOWCON);
if (enable)
val |= SHADOWCON_CHx_ENABLE(win);
else
val &= ~SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
static void fimd_clear_channel(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
int win, ch_enabled = 0;
DRM_DEBUG_KMS("%s\n", __FILE__);
@ -226,16 +271,12 @@ static void fimd_clear_channel(struct exynos_drm_manager *mgr)
u32 val = readl(ctx->regs + WINCON(win));
if (val & WINCONx_ENWIN) {
/* wincon */
val &= ~WINCONx_ENWIN;
writel(val, ctx->regs + WINCON(win));
fimd_enable_video_output(ctx, win, false);
if (ctx->driver_data->has_shadowcon)
fimd_enable_shadow_channel_path(ctx, win,
false);
/* unprotect windows */
if (ctx->driver_data->has_shadowcon) {
val = readl(ctx->regs + SHADOWCON);
val &= ~SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
ch_enabled = 1;
}
}
@ -253,7 +294,7 @@ static void fimd_clear_channel(struct exynos_drm_manager *mgr)
static int fimd_mgr_initialize(struct exynos_drm_manager *mgr,
struct drm_device *drm_dev)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct exynos_drm_private *priv;
priv = drm_dev->dev_private;
@ -275,7 +316,7 @@ static int fimd_mgr_initialize(struct exynos_drm_manager *mgr,
static void fimd_mgr_remove(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
/* detach this sub driver from iommu mapping if supported. */
if (is_drm_iommu_supported(ctx->drm_dev))
@ -315,14 +356,14 @@ static bool fimd_mode_fixup(struct exynos_drm_manager *mgr,
static void fimd_mode_set(struct exynos_drm_manager *mgr,
const struct drm_display_mode *in_mode)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
drm_mode_copy(&ctx->mode, in_mode);
}
static void fimd_commit(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct drm_display_mode *mode = &ctx->mode;
struct fimd_driver_data *driver_data = ctx->driver_data;
void *timing_base = ctx->regs + driver_data->timing_base;
@ -343,7 +384,8 @@ static void fimd_commit(struct exynos_drm_manager *mgr)
writel(0, timing_base + I80IFCONFBx(0));
/* set video type selection to I80 interface */
if (ctx->sysreg && regmap_update_bits(ctx->sysreg,
if (driver_data->has_vtsel && ctx->sysreg &&
regmap_update_bits(ctx->sysreg,
driver_data->lcdblk_offset,
0x3 << driver_data->lcdblk_vt_shift,
0x1 << driver_data->lcdblk_vt_shift)) {
@ -421,7 +463,7 @@ static void fimd_commit(struct exynos_drm_manager *mgr)
static int fimd_enable_vblank(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
u32 val;
if (ctx->suspended)
@ -431,12 +473,19 @@ static int fimd_enable_vblank(struct exynos_drm_manager *mgr)
val = readl(ctx->regs + VIDINTCON0);
val |= VIDINTCON0_INT_ENABLE;
val |= VIDINTCON0_INT_FRAME;
val &= ~VIDINTCON0_FRAMESEL0_MASK;
val |= VIDINTCON0_FRAMESEL0_VSYNC;
val &= ~VIDINTCON0_FRAMESEL1_MASK;
val |= VIDINTCON0_FRAMESEL1_NONE;
if (ctx->i80_if) {
val |= VIDINTCON0_INT_I80IFDONE;
val |= VIDINTCON0_INT_SYSMAINCON;
val &= ~VIDINTCON0_INT_SYSSUBCON;
} else {
val |= VIDINTCON0_INT_FRAME;
val &= ~VIDINTCON0_FRAMESEL0_MASK;
val |= VIDINTCON0_FRAMESEL0_VSYNC;
val &= ~VIDINTCON0_FRAMESEL1_MASK;
val |= VIDINTCON0_FRAMESEL1_NONE;
}
writel(val, ctx->regs + VIDINTCON0);
}
@ -446,7 +495,7 @@ static int fimd_enable_vblank(struct exynos_drm_manager *mgr)
static void fimd_disable_vblank(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
u32 val;
if (ctx->suspended)
@ -455,9 +504,15 @@ static void fimd_disable_vblank(struct exynos_drm_manager *mgr)
if (test_and_clear_bit(0, &ctx->irq_flags)) {
val = readl(ctx->regs + VIDINTCON0);
val &= ~VIDINTCON0_INT_FRAME;
val &= ~VIDINTCON0_INT_ENABLE;
if (ctx->i80_if) {
val &= ~VIDINTCON0_INT_I80IFDONE;
val &= ~VIDINTCON0_INT_SYSMAINCON;
val &= ~VIDINTCON0_INT_SYSSUBCON;
} else
val &= ~VIDINTCON0_INT_FRAME;
writel(val, ctx->regs + VIDINTCON0);
}
}
@ -465,7 +520,7 @@ static void fimd_disable_vblank(struct exynos_drm_manager *mgr)
static void fimd_win_mode_set(struct exynos_drm_manager *mgr,
struct exynos_drm_overlay *overlay)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int win;
unsigned long offset;
@ -623,7 +678,7 @@ static void fimd_shadow_protect_win(struct fimd_context *ctx,
static void fimd_win_commit(struct exynos_drm_manager *mgr, int zpos)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int win = zpos;
unsigned long val, alpha, size;
@ -730,20 +785,14 @@ static void fimd_win_commit(struct exynos_drm_manager *mgr, int zpos)
if (win != 0)
fimd_win_set_colkey(ctx, win);
/* wincon */
val = readl(ctx->regs + WINCON(win));
val |= WINCONx_ENWIN;
writel(val, ctx->regs + WINCON(win));
fimd_enable_video_output(ctx, win, true);
if (ctx->driver_data->has_shadowcon)
fimd_enable_shadow_channel_path(ctx, win, true);
/* Enable DMA channel and unprotect windows */
fimd_shadow_protect_win(ctx, win, false);
if (ctx->driver_data->has_shadowcon) {
val = readl(ctx->regs + SHADOWCON);
val |= SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
win_data->enabled = true;
if (ctx->i80_if)
@ -752,10 +801,9 @@ static void fimd_win_commit(struct exynos_drm_manager *mgr, int zpos)
static void fimd_win_disable(struct exynos_drm_manager *mgr, int zpos)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int win = zpos;
u32 val;
if (win == DEFAULT_ZPOS)
win = ctx->default_win;
@ -774,18 +822,12 @@ static void fimd_win_disable(struct exynos_drm_manager *mgr, int zpos)
/* protect windows */
fimd_shadow_protect_win(ctx, win, true);
/* wincon */
val = readl(ctx->regs + WINCON(win));
val &= ~WINCONx_ENWIN;
writel(val, ctx->regs + WINCON(win));
fimd_enable_video_output(ctx, win, false);
if (ctx->driver_data->has_shadowcon)
fimd_enable_shadow_channel_path(ctx, win, false);
/* unprotect windows */
if (ctx->driver_data->has_shadowcon) {
val = readl(ctx->regs + SHADOWCON);
val &= ~SHADOWCON_CHx_ENABLE(win);
writel(val, ctx->regs + SHADOWCON);
}
fimd_shadow_protect_win(ctx, win, false);
win_data->enabled = false;
@ -793,7 +835,7 @@ static void fimd_win_disable(struct exynos_drm_manager *mgr, int zpos)
static void fimd_window_suspend(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int i;
@ -803,12 +845,11 @@ static void fimd_window_suspend(struct exynos_drm_manager *mgr)
if (win_data->enabled)
fimd_win_disable(mgr, i);
}
fimd_wait_for_vblank(mgr);
}
static void fimd_window_resume(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int i;
@ -821,7 +862,7 @@ static void fimd_window_resume(struct exynos_drm_manager *mgr)
static void fimd_apply(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
struct fimd_win_data *win_data;
int i;
@ -838,7 +879,7 @@ static void fimd_apply(struct exynos_drm_manager *mgr)
static int fimd_poweron(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
int ret;
if (!ctx->suspended)
@ -886,7 +927,7 @@ bus_clk_err:
static int fimd_poweroff(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
if (ctx->suspended)
return 0;
@ -928,39 +969,41 @@ static void fimd_dpms(struct exynos_drm_manager *mgr, int mode)
static void fimd_trigger(struct device *dev)
{
struct exynos_drm_manager *mgr = get_fimd_manager(dev);
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = dev_get_drvdata(dev);
struct fimd_driver_data *driver_data = ctx->driver_data;
void *timing_base = ctx->regs + driver_data->timing_base;
u32 reg;
atomic_set(&ctx->triggering, 1);
/*
* Skips triggering if in triggering state, because multiple triggering
* requests can cause panel reset.
*/
if (atomic_read(&ctx->triggering))
return;
reg = readl(ctx->regs + VIDINTCON0);
reg |= (VIDINTCON0_INT_ENABLE | VIDINTCON0_INT_I80IFDONE |
VIDINTCON0_INT_SYSMAINCON);
writel(reg, ctx->regs + VIDINTCON0);
/* Enters triggering mode */
atomic_set(&ctx->triggering, 1);
reg = readl(timing_base + TRIGCON);
reg |= (TRGMODE_I80_RGB_ENABLE_I80 | SWTRGCMD_I80_RGB_ENABLE);
writel(reg, timing_base + TRIGCON);
/*
* Exits triggering mode if vblank is not enabled yet, because when the
* VIDINTCON0 register is not set, it can not exit from triggering mode.
*/
if (!test_bit(0, &ctx->irq_flags))
atomic_set(&ctx->triggering, 0);
}
static void fimd_te_handler(struct exynos_drm_manager *mgr)
{
struct fimd_context *ctx = mgr->ctx;
struct fimd_context *ctx = mgr_to_fimd(mgr);
/* Checks the crtc is detached already from encoder */
if (ctx->pipe < 0 || !ctx->drm_dev)
return;
/*
* Skips to trigger if in triggering state, because multiple triggering
* requests can cause panel reset.
*/
if (atomic_read(&ctx->triggering))
return;
/*
* If there is a page flip request, triggers and handles the page flip
* event so that current fb can be updated into panel GRAM.
@ -972,10 +1015,10 @@ static void fimd_te_handler(struct exynos_drm_manager *mgr)
if (atomic_read(&ctx->wait_vsync_event)) {
atomic_set(&ctx->wait_vsync_event, 0);
wake_up(&ctx->wait_vsync_queue);
if (!atomic_read(&ctx->triggering))
drm_handle_vblank(ctx->drm_dev, ctx->pipe);
}
if (test_bit(0, &ctx->irq_flags))
drm_handle_vblank(ctx->drm_dev, ctx->pipe);
}
static struct exynos_drm_manager_ops fimd_manager_ops = {
@ -992,11 +1035,6 @@ static struct exynos_drm_manager_ops fimd_manager_ops = {
.te_handler = fimd_te_handler,
};
static struct exynos_drm_manager fimd_manager = {
.type = EXYNOS_DISPLAY_TYPE_LCD,
.ops = &fimd_manager_ops,
};
static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
{
struct fimd_context *ctx = (struct fimd_context *)dev_id;
@ -1013,16 +1051,10 @@ static irqreturn_t fimd_irq_handler(int irq, void *dev_id)
goto out;
if (ctx->i80_if) {
/* unset I80 frame done interrupt */
val = readl(ctx->regs + VIDINTCON0);
val &= ~(VIDINTCON0_INT_I80IFDONE | VIDINTCON0_INT_SYSMAINCON);
writel(val, ctx->regs + VIDINTCON0);
/* exit triggering mode */
atomic_set(&ctx->triggering, 0);
drm_handle_vblank(ctx->drm_dev, ctx->pipe);
exynos_drm_crtc_finish_pageflip(ctx->drm_dev, ctx->pipe);
/* Exits triggering mode */
atomic_set(&ctx->triggering, 0);
} else {
drm_handle_vblank(ctx->drm_dev, ctx->pipe);
exynos_drm_crtc_finish_pageflip(ctx->drm_dev, ctx->pipe);
@ -1040,11 +1072,11 @@ out:
static int fimd_bind(struct device *dev, struct device *master, void *data)
{
struct fimd_context *ctx = fimd_manager.ctx;
struct fimd_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
fimd_mgr_initialize(&fimd_manager, drm_dev);
exynos_drm_crtc_create(&fimd_manager);
fimd_mgr_initialize(&ctx->manager, drm_dev);
exynos_drm_crtc_create(&ctx->manager);
if (ctx->display)
exynos_drm_create_enc_conn(drm_dev, ctx->display);
@ -1055,15 +1087,14 @@ static int fimd_bind(struct device *dev, struct device *master, void *data)
static void fimd_unbind(struct device *dev, struct device *master,
void *data)
{
struct exynos_drm_manager *mgr = dev_get_drvdata(dev);
struct fimd_context *ctx = fimd_manager.ctx;
struct fimd_context *ctx = dev_get_drvdata(dev);
fimd_dpms(mgr, DRM_MODE_DPMS_OFF);
fimd_dpms(&ctx->manager, DRM_MODE_DPMS_OFF);
if (ctx->display)
exynos_dpi_remove(dev);
exynos_dpi_remove(ctx->display);
fimd_mgr_remove(mgr);
fimd_mgr_remove(&ctx->manager);
}
static const struct component_ops fimd_component_ops = {
@ -1079,21 +1110,20 @@ static int fimd_probe(struct platform_device *pdev)
struct resource *res;
int ret = -EINVAL;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC,
fimd_manager.type);
if (ret)
return ret;
if (!dev->of_node) {
ret = -ENODEV;
goto err_del_component;
}
if (!dev->of_node)
return -ENODEV;
ctx = devm_kzalloc(dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
ret = -ENOMEM;
goto err_del_component;
}
if (!ctx)
return -ENOMEM;
ctx->manager.type = EXYNOS_DISPLAY_TYPE_LCD;
ctx->manager.ops = &fimd_manager_ops;
ret = exynos_drm_component_add(dev, EXYNOS_DEVICE_TYPE_CRTC,
ctx->manager.type);
if (ret)
return ret;
ctx->dev = dev;
ctx->suspended = true;
@ -1182,27 +1212,27 @@ static int fimd_probe(struct platform_device *pdev)
init_waitqueue_head(&ctx->wait_vsync_queue);
atomic_set(&ctx->wait_vsync_event, 0);
platform_set_drvdata(pdev, &fimd_manager);
fimd_manager.ctx = ctx;
platform_set_drvdata(pdev, ctx);
ctx->display = exynos_dpi_probe(dev);
if (IS_ERR(ctx->display))
return PTR_ERR(ctx->display);
if (IS_ERR(ctx->display)) {
ret = PTR_ERR(ctx->display);
goto err_del_component;
}
pm_runtime_enable(&pdev->dev);
pm_runtime_enable(dev);
ret = component_add(&pdev->dev, &fimd_component_ops);
ret = component_add(dev, &fimd_component_ops);
if (ret)
goto err_disable_pm_runtime;
return ret;
err_disable_pm_runtime:
pm_runtime_disable(&pdev->dev);
pm_runtime_disable(dev);
err_del_component:
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC);
exynos_drm_component_del(dev, EXYNOS_DEVICE_TYPE_CRTC);
return ret;
}
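
One more illustration, this time of the FIMD cleanup above that folds the repeated readl()/modify/writel() sequences into helpers such as fimd_enable_video_output(). The sketch below models the same set/clear helper against a fake register array; the bit layout and names are invented and only the read-modify-write shape matches the driver.

#include <stdint.h>
#include <stdio.h>

#define WINCONx_ENWIN	(1u << 0)	/* invented bit layout, demo only */
#define WINDOWS_NR	5

static uint32_t fake_regs[WINDOWS_NR];	/* stands in for ctx->regs + WINCON(win) */

static void enable_video_output(int win, int enable)
{
	uint32_t val = fake_regs[win];	/* readl() equivalent */

	if (enable)
		val |= WINCONx_ENWIN;
	else
		val &= ~WINCONx_ENWIN;

	fake_regs[win] = val;		/* writel() equivalent */
}

int main(void)
{
	enable_video_output(2, 1);
	printf("win2 after enable:  %#x\n", (unsigned)fake_regs[2]);
	enable_video_output(2, 0);
	printf("win2 after disable: %#x\n", (unsigned)fake_regs[2]);
	return 0;
}
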


@ -40,7 +40,6 @@ static inline bool is_drm_iommu_supported(struct drm_device *drm_dev)
#else
struct dma_iommu_mapping;
static inline int drm_create_iommu_mapping(struct drm_device *drm_dev)
{
return 0;


@ -426,18 +426,21 @@ int exynos_drm_ipp_set_property(struct drm_device *drm_dev, void *data,
c_node->start_work = ipp_create_cmd_work();
if (IS_ERR(c_node->start_work)) {
DRM_ERROR("failed to create start work.\n");
ret = PTR_ERR(c_node->start_work);
goto err_remove_id;
}
c_node->stop_work = ipp_create_cmd_work();
if (IS_ERR(c_node->stop_work)) {
DRM_ERROR("failed to create stop work.\n");
ret = PTR_ERR(c_node->stop_work);
goto err_free_start;
}
c_node->event_work = ipp_create_event_work();
if (IS_ERR(c_node->event_work)) {
DRM_ERROR("failed to create event work.\n");
ret = PTR_ERR(c_node->event_work);
goto err_free_stop;
}


@ -14,6 +14,7 @@
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/component.h>
#include <drm/exynos_drm.h>
@ -28,7 +29,6 @@
/* vidi has totally three virtual windows. */
#define WINDOWS_NR 3
#define get_vidi_mgr(dev) platform_get_drvdata(to_platform_device(dev))
#define ctx_from_connector(c) container_of(c, struct vidi_context, \
connector)
@ -47,11 +47,13 @@ struct vidi_win_data {
};
struct vidi_context {
struct exynos_drm_manager manager;
struct exynos_drm_display display;
struct platform_device *pdev;
struct drm_device *drm_dev;
struct drm_crtc *crtc;
struct drm_encoder *encoder;
struct drm_connector connector;
struct exynos_drm_subdrv subdrv;
struct vidi_win_data win_data[WINDOWS_NR];
struct edid *raw_edid;
unsigned int clkdiv;
@ -66,6 +68,16 @@ struct vidi_context {
int pipe;
};
static inline struct vidi_context *manager_to_vidi(struct exynos_drm_manager *m)
{
return container_of(m, struct vidi_context, manager);
}
static inline struct vidi_context *display_to_vidi(struct exynos_drm_display *d)
{
return container_of(d, struct vidi_context, display);
}
static const char fake_edid_info[] = {
0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00, 0x4c, 0x2d, 0x05, 0x05,
0x00, 0x00, 0x00, 0x00, 0x30, 0x12, 0x01, 0x03, 0x80, 0x10, 0x09, 0x78,
@ -93,7 +105,7 @@ static const char fake_edid_info[] = {
static void vidi_apply(struct exynos_drm_manager *mgr)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
struct exynos_drm_manager_ops *mgr_ops = mgr->ops;
struct vidi_win_data *win_data;
int i;
@ -110,7 +122,7 @@ static void vidi_apply(struct exynos_drm_manager *mgr)
static void vidi_commit(struct exynos_drm_manager *mgr)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
if (ctx->suspended)
return;
@ -118,7 +130,7 @@ static void vidi_commit(struct exynos_drm_manager *mgr)
static int vidi_enable_vblank(struct exynos_drm_manager *mgr)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
if (ctx->suspended)
return -EPERM;
@ -140,7 +152,7 @@ static int vidi_enable_vblank(struct exynos_drm_manager *mgr)
static void vidi_disable_vblank(struct exynos_drm_manager *mgr)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
if (ctx->suspended)
return;
@ -152,7 +164,7 @@ static void vidi_disable_vblank(struct exynos_drm_manager *mgr)
static void vidi_win_mode_set(struct exynos_drm_manager *mgr,
struct exynos_drm_overlay *overlay)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
struct vidi_win_data *win_data;
int win;
unsigned long offset;
@ -204,7 +216,7 @@ static void vidi_win_mode_set(struct exynos_drm_manager *mgr,
static void vidi_win_commit(struct exynos_drm_manager *mgr, int zpos)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
struct vidi_win_data *win_data;
int win = zpos;
@ -229,7 +241,7 @@ static void vidi_win_commit(struct exynos_drm_manager *mgr, int zpos)
static void vidi_win_disable(struct exynos_drm_manager *mgr, int zpos)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
struct vidi_win_data *win_data;
int win = zpos;
@ -247,7 +259,7 @@ static void vidi_win_disable(struct exynos_drm_manager *mgr, int zpos)
static int vidi_power_on(struct exynos_drm_manager *mgr, bool enable)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
DRM_DEBUG_KMS("%s\n", __FILE__);
@ -271,7 +283,7 @@ static int vidi_power_on(struct exynos_drm_manager *mgr, bool enable)
static void vidi_dpms(struct exynos_drm_manager *mgr, int mode)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
DRM_DEBUG_KMS("%d\n", mode);
@ -297,7 +309,7 @@ static void vidi_dpms(struct exynos_drm_manager *mgr, int mode)
static int vidi_mgr_initialize(struct exynos_drm_manager *mgr,
struct drm_device *drm_dev)
{
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = manager_to_vidi(mgr);
struct exynos_drm_private *priv = drm_dev->dev_private;
mgr->drm_dev = ctx->drm_dev = drm_dev;
@ -316,11 +328,6 @@ static struct exynos_drm_manager_ops vidi_manager_ops = {
.win_disable = vidi_win_disable,
};
static struct exynos_drm_manager vidi_manager = {
.type = EXYNOS_DISPLAY_TYPE_VIDI,
.ops = &vidi_manager_ops,
};
static void vidi_fake_vblank_handler(struct work_struct *work)
{
struct vidi_context *ctx = container_of(work, struct vidi_context,
@ -349,9 +356,8 @@ static void vidi_fake_vblank_handler(struct work_struct *work)
static int vidi_show_connection(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct vidi_context *ctx = dev_get_drvdata(dev);
int rc;
struct exynos_drm_manager *mgr = get_vidi_mgr(dev);
struct vidi_context *ctx = mgr->ctx;
mutex_lock(&ctx->lock);
@ -366,8 +372,7 @@ static int vidi_store_connection(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
struct exynos_drm_manager *mgr = get_vidi_mgr(dev);
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = dev_get_drvdata(dev);
int ret;
ret = kstrtoint(buf, 0, &ctx->connected);
@ -420,7 +425,7 @@ int vidi_connection_ioctl(struct drm_device *drm_dev, void *data,
display = exynos_drm_get_display(encoder);
if (display->type == EXYNOS_DISPLAY_TYPE_VIDI) {
ctx = display->ctx;
ctx = display_to_vidi(display);
break;
}
}
@ -530,7 +535,7 @@ static struct drm_connector_helper_funcs vidi_connector_helper_funcs = {
static int vidi_create_connector(struct exynos_drm_display *display,
struct drm_encoder *encoder)
{
struct vidi_context *ctx = display->ctx;
struct vidi_context *ctx = display_to_vidi(display);
struct drm_connector *connector = &ctx->connector;
int ret;
@ -556,27 +561,22 @@ static struct exynos_drm_display_ops vidi_display_ops = {
.create_connector = vidi_create_connector,
};
static struct exynos_drm_display vidi_display = {
.type = EXYNOS_DISPLAY_TYPE_VIDI,
.ops = &vidi_display_ops,
};
static int vidi_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
static int vidi_bind(struct device *dev, struct device *master, void *data)
{
struct exynos_drm_manager *mgr = get_vidi_mgr(dev);
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
struct drm_crtc *crtc = ctx->crtc;
int ret;
vidi_mgr_initialize(mgr, drm_dev);
vidi_mgr_initialize(&ctx->manager, drm_dev);
ret = exynos_drm_crtc_create(&vidi_manager);
ret = exynos_drm_crtc_create(&ctx->manager);
if (ret) {
DRM_ERROR("failed to create crtc.\n");
return ret;
}
ret = exynos_drm_create_enc_conn(drm_dev, &vidi_display);
ret = exynos_drm_create_enc_conn(drm_dev, &ctx->display);
if (ret) {
crtc->funcs->destroy(crtc);
DRM_ERROR("failed to create encoder and connector.\n");
@ -586,9 +586,18 @@ static int vidi_subdrv_probe(struct drm_device *drm_dev, struct device *dev)
return 0;
}
static void vidi_unbind(struct device *dev, struct device *master, void *data)
{
}
static const struct component_ops vidi_component_ops = {
.bind = vidi_bind,
.unbind = vidi_unbind,
};
static int vidi_probe(struct platform_device *pdev)
{
struct exynos_drm_subdrv *subdrv;
struct vidi_context *ctx;
int ret;
@ -596,40 +605,54 @@ static int vidi_probe(struct platform_device *pdev)
if (!ctx)
return -ENOMEM;
ctx->manager.type = EXYNOS_DISPLAY_TYPE_VIDI;
ctx->manager.ops = &vidi_manager_ops;
ctx->display.type = EXYNOS_DISPLAY_TYPE_VIDI;
ctx->display.ops = &vidi_display_ops;
ctx->default_win = 0;
ctx->pdev = pdev;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC,
ctx->manager.type);
if (ret)
return ret;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
ctx->display.type);
if (ret)
goto err_del_crtc_component;
INIT_WORK(&ctx->work, vidi_fake_vblank_handler);
vidi_manager.ctx = ctx;
vidi_display.ctx = ctx;
mutex_init(&ctx->lock);
platform_set_drvdata(pdev, &vidi_manager);
subdrv = &ctx->subdrv;
subdrv->dev = &pdev->dev;
subdrv->probe = vidi_subdrv_probe;
ret = exynos_drm_subdrv_register(subdrv);
if (ret < 0) {
dev_err(&pdev->dev, "failed to register drm vidi device\n");
return ret;
}
platform_set_drvdata(pdev, ctx);
ret = device_create_file(&pdev->dev, &dev_attr_connection);
if (ret < 0) {
exynos_drm_subdrv_unregister(subdrv);
DRM_INFO("failed to create connection sysfs.\n");
DRM_ERROR("failed to create connection sysfs.\n");
goto err_del_conn_component;
}
return 0;
ret = component_add(&pdev->dev, &vidi_component_ops);
if (ret)
goto err_remove_file;
return ret;
err_remove_file:
device_remove_file(&pdev->dev, &dev_attr_connection);
err_del_conn_component:
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
err_del_crtc_component:
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC);
return ret;
}
static int vidi_remove(struct platform_device *pdev)
{
struct exynos_drm_manager *mgr = platform_get_drvdata(pdev);
struct vidi_context *ctx = mgr->ctx;
struct vidi_context *ctx = platform_get_drvdata(pdev);
if (ctx->raw_edid != (struct edid *)fake_edid_info) {
kfree(ctx->raw_edid);
@ -638,6 +661,10 @@ static int vidi_remove(struct platform_device *pdev)
return -EINVAL;
}
component_del(&pdev->dev, &vidi_component_ops);
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR);
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC);
return 0;
}
@ -668,12 +695,19 @@ int exynos_drm_probe_vidi(void)
return ret;
}
static int exynos_drm_remove_vidi_device(struct device *dev, void *data)
{
platform_device_unregister(to_platform_device(dev));
return 0;
}
void exynos_drm_remove_vidi(void)
{
struct vidi_context *ctx = vidi_manager.ctx;
struct exynos_drm_subdrv *subdrv = &ctx->subdrv;
struct platform_device *pdev = to_platform_device(subdrv->dev);
int ret = driver_for_each_device(&vidi_driver.driver, NULL, NULL,
exynos_drm_remove_vidi_device);
/* silence compiler warning */
(void)ret;
platform_driver_unregister(&vidi_driver);
platform_device_unregister(pdev);
}


@ -49,7 +49,6 @@
#include <linux/gpio.h>
#include <media/s5p_hdmi.h>
#define get_hdmi_display(dev) platform_get_drvdata(to_platform_device(dev))
#define ctx_from_connector(c) container_of(c, struct hdmi_context, connector)
#define HOTPLUG_DEBOUNCE_MS 1100
@ -182,6 +181,7 @@ struct hdmi_conf_regs {
};
struct hdmi_context {
struct exynos_drm_display display;
struct device *dev;
struct drm_device *drm_dev;
struct drm_connector connector;
@ -213,6 +213,11 @@ struct hdmi_context {
enum hdmi_type type;
};
static inline struct hdmi_context *display_to_hdmi(struct exynos_drm_display *d)
{
return container_of(d, struct hdmi_context, display);
}
struct hdmiphy_config {
int pixel_clock;
u8 conf[32];
@ -1123,7 +1128,7 @@ static struct drm_connector_helper_funcs hdmi_connector_helper_funcs = {
static int hdmi_create_connector(struct exynos_drm_display *display,
struct drm_encoder *encoder)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
struct drm_connector *connector = &hdata->connector;
int ret;
@ -2000,7 +2005,7 @@ static void hdmi_v14_mode_set(struct hdmi_context *hdata,
static void hdmi_mode_set(struct exynos_drm_display *display,
struct drm_display_mode *mode)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
struct drm_display_mode *m = mode;
DRM_DEBUG_KMS("xres=%d, yres=%d, refresh=%d, intl=%s\n",
@ -2019,7 +2024,7 @@ static void hdmi_mode_set(struct exynos_drm_display *display,
static void hdmi_commit(struct exynos_drm_display *display)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
mutex_lock(&hdata->hdmi_mutex);
if (!hdata->powered) {
@ -2033,7 +2038,7 @@ static void hdmi_commit(struct exynos_drm_display *display)
static void hdmi_poweron(struct exynos_drm_display *display)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
struct hdmi_resources *res = &hdata->res;
mutex_lock(&hdata->hdmi_mutex);
@ -2064,7 +2069,7 @@ static void hdmi_poweron(struct exynos_drm_display *display)
static void hdmi_poweroff(struct exynos_drm_display *display)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
struct hdmi_resources *res = &hdata->res;
mutex_lock(&hdata->hdmi_mutex);
@ -2099,7 +2104,7 @@ out:
static void hdmi_dpms(struct exynos_drm_display *display, int mode)
{
struct hdmi_context *hdata = display->ctx;
struct hdmi_context *hdata = display_to_hdmi(display);
struct drm_encoder *encoder = hdata->encoder;
struct drm_crtc *crtc = encoder->crtc;
struct drm_crtc_helper_funcs *funcs = NULL;
@ -2143,11 +2148,6 @@ static struct exynos_drm_display_ops hdmi_display_ops = {
.commit = hdmi_commit,
};
static struct exynos_drm_display hdmi_display = {
.type = EXYNOS_DISPLAY_TYPE_HDMI,
.ops = &hdmi_display_ops,
};
static void hdmi_hotplug_work_func(struct work_struct *work)
{
struct hdmi_context *hdata;
@ -2302,12 +2302,11 @@ MODULE_DEVICE_TABLE (of, hdmi_match_types);
static int hdmi_bind(struct device *dev, struct device *master, void *data)
{
struct drm_device *drm_dev = data;
struct hdmi_context *hdata;
struct hdmi_context *hdata = dev_get_drvdata(dev);
hdata = hdmi_display.ctx;
hdata->drm_dev = drm_dev;
return exynos_drm_create_enc_conn(drm_dev, &hdmi_display);
return exynos_drm_create_enc_conn(drm_dev, &hdata->display);
}
static void hdmi_unbind(struct device *dev, struct device *master, void *data)
@ -2349,31 +2348,28 @@ static int hdmi_probe(struct platform_device *pdev)
struct resource *res;
int ret;
if (!dev->of_node)
return -ENODEV;
pdata = drm_hdmi_dt_parse_pdata(dev);
if (!pdata)
return -EINVAL;
hdata = devm_kzalloc(dev, sizeof(struct hdmi_context), GFP_KERNEL);
if (!hdata)
return -ENOMEM;
hdata->display.type = EXYNOS_DISPLAY_TYPE_HDMI;
hdata->display.ops = &hdmi_display_ops;
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CONNECTOR,
hdmi_display.type);
hdata->display.type);
if (ret)
return ret;
if (!dev->of_node) {
ret = -ENODEV;
goto err_del_component;
}
pdata = drm_hdmi_dt_parse_pdata(dev);
if (!pdata) {
ret = -EINVAL;
goto err_del_component;
}
hdata = devm_kzalloc(dev, sizeof(struct hdmi_context), GFP_KERNEL);
if (!hdata) {
ret = -ENOMEM;
goto err_del_component;
}
mutex_init(&hdata->hdmi_mutex);
platform_set_drvdata(pdev, &hdmi_display);
platform_set_drvdata(pdev, hdata);
match = of_match_node(hdmi_match_types, dev->of_node);
if (!match) {
@ -2485,7 +2481,6 @@ out_get_phy_port:
}
pm_runtime_enable(dev);
hdmi_display.ctx = hdata;
ret = component_add(&pdev->dev, &hdmi_component_ops);
if (ret)
@ -2510,7 +2505,7 @@ err_del_component:
static int hdmi_remove(struct platform_device *pdev)
{
struct hdmi_context *hdata = hdmi_display.ctx;
struct hdmi_context *hdata = platform_get_drvdata(pdev);
cancel_delayed_work_sync(&hdata->hotplug_work);


@ -40,8 +40,6 @@
#include "exynos_drm_iommu.h"
#include "exynos_mixer.h"
#define get_mixer_manager(dev) platform_get_drvdata(to_platform_device(dev))
#define MIXER_WIN_NR 3
#define MIXER_DEFAULT_WIN 0
@ -86,6 +84,7 @@ enum mixer_version_id {
};
struct mixer_context {
struct exynos_drm_manager manager;
struct platform_device *pdev;
struct device *dev;
struct drm_device *drm_dev;
@ -104,6 +103,11 @@ struct mixer_context {
atomic_t wait_vsync_event;
};
static inline struct mixer_context *mgr_to_mixer(struct exynos_drm_manager *mgr)
{
return container_of(mgr, struct mixer_context, manager);
}
struct mixer_drv_data {
enum mixer_version_id version;
bool is_vp_enabled;
@ -854,7 +858,7 @@ static int mixer_initialize(struct exynos_drm_manager *mgr,
struct drm_device *drm_dev)
{
int ret;
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
struct exynos_drm_private *priv;
priv = drm_dev->dev_private;
@ -885,7 +889,7 @@ static int mixer_initialize(struct exynos_drm_manager *mgr,
static void mixer_mgr_remove(struct exynos_drm_manager *mgr)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
if (is_drm_iommu_supported(mixer_ctx->drm_dev))
drm_iommu_detach_device(mixer_ctx->drm_dev, mixer_ctx->dev);
@ -893,7 +897,7 @@ static void mixer_mgr_remove(struct exynos_drm_manager *mgr)
static int mixer_enable_vblank(struct exynos_drm_manager *mgr)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
struct mixer_resources *res = &mixer_ctx->mixer_res;
if (!mixer_ctx->powered) {
@ -910,7 +914,7 @@ static int mixer_enable_vblank(struct exynos_drm_manager *mgr)
static void mixer_disable_vblank(struct exynos_drm_manager *mgr)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
struct mixer_resources *res = &mixer_ctx->mixer_res;
/* disable vsync interrupt */
@ -920,7 +924,7 @@ static void mixer_disable_vblank(struct exynos_drm_manager *mgr)
static void mixer_win_mode_set(struct exynos_drm_manager *mgr,
struct exynos_drm_overlay *overlay)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
struct hdmi_win_data *win_data;
int win;
@ -971,7 +975,7 @@ static void mixer_win_mode_set(struct exynos_drm_manager *mgr,
static void mixer_win_commit(struct exynos_drm_manager *mgr, int zpos)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
int win = zpos == DEFAULT_ZPOS ? MIXER_DEFAULT_WIN : zpos;
DRM_DEBUG_KMS("win: %d\n", win);
@ -993,7 +997,7 @@ static void mixer_win_commit(struct exynos_drm_manager *mgr, int zpos)
static void mixer_win_disable(struct exynos_drm_manager *mgr, int zpos)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
struct mixer_resources *res = &mixer_ctx->mixer_res;
int win = zpos == DEFAULT_ZPOS ? MIXER_DEFAULT_WIN : zpos;
unsigned long flags;
@ -1021,7 +1025,7 @@ static void mixer_win_disable(struct exynos_drm_manager *mgr, int zpos)
static void mixer_wait_for_vblank(struct exynos_drm_manager *mgr)
{
struct mixer_context *mixer_ctx = mgr->ctx;
struct mixer_context *mixer_ctx = mgr_to_mixer(mgr);
mutex_lock(&mixer_ctx->mixer_mutex);
if (!mixer_ctx->powered) {
@ -1048,7 +1052,7 @@ static void mixer_wait_for_vblank(struct exynos_drm_manager *mgr)
static void mixer_window_suspend(struct exynos_drm_manager *mgr)
{
struct mixer_context *ctx = mgr->ctx;
struct mixer_context *ctx = mgr_to_mixer(mgr);
struct hdmi_win_data *win_data;
int i;
@ -1062,7 +1066,7 @@ static void mixer_window_suspend(struct exynos_drm_manager *mgr)
static void mixer_window_resume(struct exynos_drm_manager *mgr)
{
struct mixer_context *ctx = mgr->ctx;
struct mixer_context *ctx = mgr_to_mixer(mgr);
struct hdmi_win_data *win_data;
int i;
@ -1077,7 +1081,7 @@ static void mixer_window_resume(struct exynos_drm_manager *mgr)
static void mixer_poweron(struct exynos_drm_manager *mgr)
{
struct mixer_context *ctx = mgr->ctx;
struct mixer_context *ctx = mgr_to_mixer(mgr);
struct mixer_resources *res = &ctx->mixer_res;
mutex_lock(&ctx->mixer_mutex);
@ -1111,7 +1115,7 @@ static void mixer_poweron(struct exynos_drm_manager *mgr)
static void mixer_poweroff(struct exynos_drm_manager *mgr)
{
struct mixer_context *ctx = mgr->ctx;
struct mixer_context *ctx = mgr_to_mixer(mgr);
struct mixer_resources *res = &ctx->mixer_res;
mutex_lock(&ctx->mixer_mutex);
@ -1187,11 +1191,6 @@ static struct exynos_drm_manager_ops mixer_manager_ops = {
.win_disable = mixer_win_disable,
};
static struct exynos_drm_manager mixer_manager = {
.type = EXYNOS_DISPLAY_TYPE_HDMI,
.ops = &mixer_manager_ops,
};
static struct mixer_drv_data exynos5420_mxr_drv_data = {
.version = MXR_VER_128_0_0_184,
.is_vp_enabled = 0,
@ -1249,13 +1248,45 @@ MODULE_DEVICE_TABLE(of, mixer_match_types);
static int mixer_bind(struct device *dev, struct device *manager, void *data)
{
struct platform_device *pdev = to_platform_device(dev);
struct mixer_context *ctx = dev_get_drvdata(dev);
struct drm_device *drm_dev = data;
struct mixer_context *ctx;
struct mixer_drv_data *drv;
int ret;
dev_info(dev, "probe start\n");
ret = mixer_initialize(&ctx->manager, drm_dev);
if (ret)
return ret;
ret = exynos_drm_crtc_create(&ctx->manager);
if (ret) {
mixer_mgr_remove(&ctx->manager);
return ret;
}
pm_runtime_enable(dev);
return 0;
}
static void mixer_unbind(struct device *dev, struct device *master, void *data)
{
struct mixer_context *ctx = dev_get_drvdata(dev);
mixer_mgr_remove(&ctx->manager);
pm_runtime_disable(dev);
}
static const struct component_ops mixer_component_ops = {
.bind = mixer_bind,
.unbind = mixer_unbind,
};
static int mixer_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mixer_drv_data *drv;
struct mixer_context *ctx;
int ret;
ctx = devm_kzalloc(&pdev->dev, sizeof(*ctx), GFP_KERNEL);
if (!ctx) {
@ -1265,8 +1296,12 @@ static int mixer_bind(struct device *dev, struct device *manager, void *data)
mutex_init(&ctx->mixer_mutex);
ctx->manager.type = EXYNOS_DISPLAY_TYPE_HDMI;
ctx->manager.ops = &mixer_manager_ops;
if (dev->of_node) {
const struct of_device_id *match;
match = of_match_node(mixer_match_types, dev->of_node);
drv = (struct mixer_drv_data *)match->data;
} else {
@ -1282,57 +1317,28 @@ static int mixer_bind(struct device *dev, struct device *manager, void *data)
init_waitqueue_head(&ctx->wait_vsync_queue);
atomic_set(&ctx->wait_vsync_event, 0);
mixer_manager.ctx = ctx;
ret = mixer_initialize(&mixer_manager, drm_dev);
if (ret)
return ret;
platform_set_drvdata(pdev, &mixer_manager);
ret = exynos_drm_crtc_create(&mixer_manager);
if (ret) {
mixer_mgr_remove(&mixer_manager);
return ret;
}
pm_runtime_enable(dev);
return 0;
}
static void mixer_unbind(struct device *dev, struct device *master, void *data)
{
struct exynos_drm_manager *mgr = dev_get_drvdata(dev);
dev_info(dev, "remove successful\n");
mixer_mgr_remove(mgr);
pm_runtime_disable(dev);
}
static const struct component_ops mixer_component_ops = {
.bind = mixer_bind,
.unbind = mixer_unbind,
};
static int mixer_probe(struct platform_device *pdev)
{
int ret;
platform_set_drvdata(pdev, ctx);
ret = exynos_drm_component_add(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC,
mixer_manager.type);
ctx->manager.type);
if (ret)
return ret;
ret = component_add(&pdev->dev, &mixer_component_ops);
if (ret)
if (ret) {
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC);
return ret;
}
pm_runtime_enable(dev);
return ret;
}
static int mixer_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
component_del(&pdev->dev, &mixer_component_ops);
exynos_drm_component_del(&pdev->dev, EXYNOS_DEVICE_TYPE_CRTC);


@ -39,6 +39,7 @@ gma500_gfx-$(CONFIG_DRM_GMA3600) += cdv_device.o \
gma500_gfx-$(CONFIG_DRM_GMA600) += oaktrail_device.o \
oaktrail_crtc.o \
oaktrail_lvds.o \
oaktrail_lvds_i2c.o \
oaktrail_hdmi.o \
oaktrail_hdmi_i2c.o


@ -37,6 +37,201 @@
#include "gma_display.h"
#include <drm/drm_dp_helper.h>
/**
* struct i2c_algo_dp_aux_data - driver interface structure for i2c over dp
* aux algorithm
* @running: set by the algo indicating whether an i2c is ongoing or whether
* the i2c bus is quiescent
* @address: i2c target address for the currently ongoing transfer
* @aux_ch: driver callback to transfer a single byte of the i2c payload
*/
struct i2c_algo_dp_aux_data {
bool running;
u16 address;
int (*aux_ch) (struct i2c_adapter *adapter,
int mode, uint8_t write_byte,
uint8_t *read_byte);
};
/* Run a single AUX_CH I2C transaction, writing/reading data as necessary */
static int
i2c_algo_dp_aux_transaction(struct i2c_adapter *adapter, int mode,
uint8_t write_byte, uint8_t *read_byte)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
ret = (*algo_data->aux_ch)(adapter, mode,
write_byte, read_byte);
return ret;
}
/*
* I2C over AUX CH
*/
/*
* Send the address. If the I2C link is running, this 'restarts'
* the connection with the new address, this is used for doing
* a write followed by a read (as needed for DDC)
*/
static int
i2c_algo_dp_aux_address(struct i2c_adapter *adapter, u16 address, bool reading)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int mode = MODE_I2C_START;
int ret;
if (reading)
mode |= MODE_I2C_READ;
else
mode |= MODE_I2C_WRITE;
algo_data->address = address;
algo_data->running = true;
ret = i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL);
return ret;
}
/*
* Stop the I2C transaction. This closes out the link, sending
* a bare address packet with the MOT bit turned off
*/
static void
i2c_algo_dp_aux_stop(struct i2c_adapter *adapter, bool reading)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int mode = MODE_I2C_STOP;
if (reading)
mode |= MODE_I2C_READ;
else
mode |= MODE_I2C_WRITE;
if (algo_data->running) {
(void) i2c_algo_dp_aux_transaction(adapter, mode, 0, NULL);
algo_data->running = false;
}
}
/*
* Write a single byte to the current I2C address, the
* I2C link must be running or this returns -EIO
*/
static int
i2c_algo_dp_aux_put_byte(struct i2c_adapter *adapter, u8 byte)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
if (!algo_data->running)
return -EIO;
ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_WRITE, byte, NULL);
return ret;
}
/*
* Read a single byte from the current I2C address, the
* I2C link must be running or this returns -EIO
*/
static int
i2c_algo_dp_aux_get_byte(struct i2c_adapter *adapter, u8 *byte_ret)
{
struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;
int ret;
if (!algo_data->running)
return -EIO;
ret = i2c_algo_dp_aux_transaction(adapter, MODE_I2C_READ, 0, byte_ret);
return ret;
}
static int
i2c_algo_dp_aux_xfer(struct i2c_adapter *adapter,
struct i2c_msg *msgs,
int num)
{
int ret = 0;
bool reading = false;
int m;
int b;
for (m = 0; m < num; m++) {
u16 len = msgs[m].len;
u8 *buf = msgs[m].buf;
reading = (msgs[m].flags & I2C_M_RD) != 0;
ret = i2c_algo_dp_aux_address(adapter, msgs[m].addr, reading);
if (ret < 0)
break;
if (reading) {
for (b = 0; b < len; b++) {
ret = i2c_algo_dp_aux_get_byte(adapter, &buf[b]);
if (ret < 0)
break;
}
} else {
for (b = 0; b < len; b++) {
ret = i2c_algo_dp_aux_put_byte(adapter, buf[b]);
if (ret < 0)
break;
}
}
if (ret < 0)
break;
}
if (ret >= 0)
ret = num;
i2c_algo_dp_aux_stop(adapter, reading);
DRM_DEBUG_KMS("dp_aux_xfer return %d\n", ret);
return ret;
}
static u32
i2c_algo_dp_aux_functionality(struct i2c_adapter *adapter)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |
I2C_FUNC_SMBUS_READ_BLOCK_DATA |
I2C_FUNC_SMBUS_BLOCK_PROC_CALL |
I2C_FUNC_10BIT_ADDR;
}
static const struct i2c_algorithm i2c_dp_aux_algo = {
.master_xfer = i2c_algo_dp_aux_xfer,
.functionality = i2c_algo_dp_aux_functionality,
};
static void
i2c_dp_aux_reset_bus(struct i2c_adapter *adapter)
{
(void) i2c_algo_dp_aux_address(adapter, 0, false);
(void) i2c_algo_dp_aux_stop(adapter, false);
}
static int
i2c_dp_aux_prepare_bus(struct i2c_adapter *adapter)
{
adapter->algo = &i2c_dp_aux_algo;
adapter->retries = 3;
i2c_dp_aux_reset_bus(adapter);
return 0;
}
/*
* FIXME: This is the old dp aux helper, gma500 is the last driver that needs to
* be ported over to the new helper code in drm_dp_helper.c like i915 or radeon.
*/
static int __deprecated
i2c_dp_aux_add_bus(struct i2c_adapter *adapter)
{
int error;
error = i2c_dp_aux_prepare_bus(adapter);
if (error)
return error;
error = i2c_add_adapter(adapter);
return error;
}
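For context, a minimal usage sketch of the helper above, not part of the patch: a caller in this file fills in a struct i2c_algo_dp_aux_data with its AUX-channel callback and registers the adapter through i2c_dp_aux_add_bus(). The my_aux_ch() and my_dp_i2c_init() names are hypothetical placeholders.

static int my_aux_ch(struct i2c_adapter *adapter, int mode,
		     uint8_t write_byte, uint8_t *read_byte)
{
	struct i2c_algo_dp_aux_data *algo_data = adapter->algo_data;

	/*
	 * A real implementation drives one AUX transaction to
	 * algo_data->address here: MODE_I2C_START/MODE_I2C_STOP frame
	 * the transfer, MODE_I2C_READ fills *read_byte and
	 * MODE_I2C_WRITE sends write_byte.
	 */
	(void)algo_data;	/* placeholder, no hardware access in this sketch */
	return -EIO;
}

static int my_dp_i2c_init(struct device *dev, struct i2c_adapter *adapter,
			  struct i2c_algo_dp_aux_data *algo_data)
{
	algo_data->running = false;
	algo_data->address = 0;
	algo_data->aux_ch = my_aux_ch;

	adapter->owner = THIS_MODULE;
	adapter->class = I2C_CLASS_DDC;
	adapter->dev.parent = dev;
	adapter->algo_data = algo_data;
	strlcpy(adapter->name, "dp aux sketch", sizeof(adapter->name));

	return i2c_dp_aux_add_bus(adapter);
}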
#define _wait_for(COND, MS, W) ({ \
unsigned long timeout__ = jiffies + msecs_to_jiffies(MS); \
int ret__ = 0; \


@ -25,6 +25,7 @@
*/
#include <linux/freezer.h>
#include <video/mipi_display.h>
#include "mdfld_dsi_output.h"
#include "mdfld_dsi_pkg_sender.h"
@ -32,20 +33,6 @@
#define MDFLD_DSI_READ_MAX_COUNT 5000
enum data_type {
DSI_DT_GENERIC_SHORT_WRITE_0 = 0x03,
DSI_DT_GENERIC_SHORT_WRITE_1 = 0x13,
DSI_DT_GENERIC_SHORT_WRITE_2 = 0x23,
DSI_DT_GENERIC_READ_0 = 0x04,
DSI_DT_GENERIC_READ_1 = 0x14,
DSI_DT_GENERIC_READ_2 = 0x24,
DSI_DT_GENERIC_LONG_WRITE = 0x29,
DSI_DT_DCS_SHORT_WRITE_0 = 0x05,
DSI_DT_DCS_SHORT_WRITE_1 = 0x15,
DSI_DT_DCS_READ = 0x06,
DSI_DT_DCS_LONG_WRITE = 0x39,
};
enum {
MDFLD_DSI_PANEL_MODE_SLEEP = 0x1,
};
@ -321,9 +308,9 @@ static int send_pkg_prepare(struct mdfld_dsi_pkg_sender *sender, u8 data_type,
u8 cmd;
switch (data_type) {
case DSI_DT_DCS_SHORT_WRITE_0:
case DSI_DT_DCS_SHORT_WRITE_1:
case DSI_DT_DCS_LONG_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_LONG_WRITE:
cmd = *data;
break;
default:
@ -334,12 +321,12 @@ static int send_pkg_prepare(struct mdfld_dsi_pkg_sender *sender, u8 data_type,
sender->status = MDFLD_DSI_PKG_SENDER_BUSY;
/*wait for 120 milliseconds in case exit_sleep_mode just be sent*/
if (unlikely(cmd == DCS_ENTER_SLEEP_MODE)) {
if (unlikely(cmd == MIPI_DCS_ENTER_SLEEP_MODE)) {
/*TODO: replace it with msleep later*/
mdelay(120);
}
if (unlikely(cmd == DCS_EXIT_SLEEP_MODE)) {
if (unlikely(cmd == MIPI_DCS_EXIT_SLEEP_MODE)) {
/*TODO: replace it with msleep later*/
mdelay(120);
}
@ -352,9 +339,9 @@ static int send_pkg_done(struct mdfld_dsi_pkg_sender *sender, u8 data_type,
u8 cmd;
switch (data_type) {
case DSI_DT_DCS_SHORT_WRITE_0:
case DSI_DT_DCS_SHORT_WRITE_1:
case DSI_DT_DCS_LONG_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_LONG_WRITE:
cmd = *data;
break;
default:
@ -362,15 +349,15 @@ static int send_pkg_done(struct mdfld_dsi_pkg_sender *sender, u8 data_type,
}
/*update panel status*/
if (unlikely(cmd == DCS_ENTER_SLEEP_MODE)) {
if (unlikely(cmd == MIPI_DCS_ENTER_SLEEP_MODE)) {
sender->panel_mode |= MDFLD_DSI_PANEL_MODE_SLEEP;
/*TODO: replace it with msleep later*/
mdelay(120);
} else if (unlikely(cmd == DCS_EXIT_SLEEP_MODE)) {
} else if (unlikely(cmd == MIPI_DCS_EXIT_SLEEP_MODE)) {
sender->panel_mode &= ~MDFLD_DSI_PANEL_MODE_SLEEP;
/*TODO: replace it with msleep later*/
mdelay(120);
} else if (unlikely(cmd == DCS_SOFT_RESET)) {
} else if (unlikely(cmd == MIPI_DCS_SOFT_RESET)) {
/*TODO: replace it with msleep later*/
mdelay(5);
}
@ -405,19 +392,19 @@ static int send_pkg(struct mdfld_dsi_pkg_sender *sender, u8 data_type,
}
switch (data_type) {
case DSI_DT_GENERIC_SHORT_WRITE_0:
case DSI_DT_GENERIC_SHORT_WRITE_1:
case DSI_DT_GENERIC_SHORT_WRITE_2:
case DSI_DT_GENERIC_READ_0:
case DSI_DT_GENERIC_READ_1:
case DSI_DT_GENERIC_READ_2:
case DSI_DT_DCS_SHORT_WRITE_0:
case DSI_DT_DCS_SHORT_WRITE_1:
case DSI_DT_DCS_READ:
case MIPI_DSI_GENERIC_SHORT_WRITE_0_PARAM:
case MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM:
case MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_0_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_1_PARAM:
case MIPI_DSI_GENERIC_READ_REQUEST_2_PARAM:
case MIPI_DSI_DCS_SHORT_WRITE:
case MIPI_DSI_DCS_SHORT_WRITE_PARAM:
case MIPI_DSI_DCS_READ:
ret = send_short_pkg(sender, data_type, data[0], data[1], hs);
break;
case DSI_DT_GENERIC_LONG_WRITE:
case DSI_DT_DCS_LONG_WRITE:
case MIPI_DSI_GENERIC_LONG_WRITE:
case MIPI_DSI_DCS_LONG_WRITE:
ret = send_long_pkg(sender, data_type, data, len, hs);
break;
}
@ -440,7 +427,7 @@ int mdfld_dsi_send_mcs_long(struct mdfld_dsi_pkg_sender *sender, u8 *data,
}
spin_lock_irqsave(&sender->lock, flags);
send_pkg(sender, DSI_DT_DCS_LONG_WRITE, data, len, hs);
send_pkg(sender, MIPI_DSI_DCS_LONG_WRITE, data, len, hs);
spin_unlock_irqrestore(&sender->lock, flags);
return 0;
@ -461,10 +448,10 @@ int mdfld_dsi_send_mcs_short(struct mdfld_dsi_pkg_sender *sender, u8 cmd,
data[0] = cmd;
if (param_num) {
data_type = DSI_DT_DCS_SHORT_WRITE_1;
data_type = MIPI_DSI_DCS_SHORT_WRITE_PARAM;
data[1] = param;
} else {
data_type = DSI_DT_DCS_SHORT_WRITE_0;
data_type = MIPI_DSI_DCS_SHORT_WRITE;
data[1] = 0;
}
@ -489,17 +476,17 @@ int mdfld_dsi_send_gen_short(struct mdfld_dsi_pkg_sender *sender, u8 param0,
switch (param_num) {
case 0:
data_type = DSI_DT_GENERIC_SHORT_WRITE_0;
data_type = MIPI_DSI_GENERIC_SHORT_WRITE_0_PARAM;
data[0] = 0;
data[1] = 0;
break;
case 1:
data_type = DSI_DT_GENERIC_SHORT_WRITE_1;
data_type = MIPI_DSI_GENERIC_SHORT_WRITE_1_PARAM;
data[0] = param0;
data[1] = 0;
break;
case 2:
data_type = DSI_DT_GENERIC_SHORT_WRITE_2;
data_type = MIPI_DSI_GENERIC_SHORT_WRITE_2_PARAM;
data[0] = param0;
data[1] = param1;
break;
@ -523,7 +510,7 @@ int mdfld_dsi_send_gen_long(struct mdfld_dsi_pkg_sender *sender, u8 *data,
}
spin_lock_irqsave(&sender->lock, flags);
send_pkg(sender, DSI_DT_GENERIC_LONG_WRITE, data, len, hs);
send_pkg(sender, MIPI_DSI_GENERIC_LONG_WRITE, data, len, hs);
spin_unlock_irqrestore(&sender->lock, flags);
return 0;
@ -594,7 +581,7 @@ int mdfld_dsi_read_mcs(struct mdfld_dsi_pkg_sender *sender, u8 cmd,
return -EINVAL;
}
return __read_panel_data(sender, DSI_DT_DCS_READ, &cmd, 1,
return __read_panel_data(sender, MIPI_DSI_DCS_READ, &cmd, 1,
data, len, hs);
}
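For reference, a minimal usage sketch, not part of the patch: after this change both the internal packet data types (MIPI_DSI_*) and the DCS opcodes (MIPI_DCS_*) come from the shared <video/mipi_display.h> header rather than driver-private DSI_DT_* values, so a caller sends DCS commands with the standard opcodes. The my_panel_unblank() name and the sender handle are assumptions.

static void my_panel_unblank(struct mdfld_dsi_pkg_sender *sender)
{
	/* DCS exit_sleep_mode, no parameter, high-speed mode */
	mdfld_dsi_send_mcs_short(sender, MIPI_DCS_EXIT_SLEEP_MODE, 0, 0, true);

	/* DCS set_display_on, no parameter */
	mdfld_dsi_send_mcs_short(sender, MIPI_DCS_SET_DISPLAY_ON, 0, 0, true);
}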

Some files were not shown because too many files changed in this diff.