Merge tag 'drm-intel-next-2019-12-23' of git://anongit.freedesktop.org/drm/drm-intel into drm-next

i915 features for v5.6:

- Separate hardware and uapi state (Maarten)

- Expose a number of sprite and plane formats (Ville)

- DDC symlink in HDMI connector sysfs directory (Andrzej Pietrasiewicz)

- Improve obj->mm.lock nesting lock annotation (Daniel)
  (Includes lockdep changes)

- Selftest improvements across the board (Chris)

- ICL/TGL VDSC support on DSI (Jani, Vandita)

- TGL DSB fixes (Animesh, Lucas, Tvrtko)

- VBT parsing improvements and fixes (Lucas, Matt, José, Jani, Dan Carpenter)

- Fix LPSS vs. PMIC PWM backlight use on BYT/CHT (Hans)
  (Includes ACPI+MFD changes)

- Display state, crtc, plane code refactoring (Ville)

- Set opregion chpd value to indicate the driver handles hotplug (Hans de Goede)

- DSI updates and fixes, TGL pipe D support, port mapping (José, Jani, Vandita)

- Make HDCP 2.2 support cover CFL (Juston Li)

- Fix CML PCI IDs and ULT (Shawn Lee)

- CMP-V PCH fix (Imre)

- TGL: Add another TGL PCH ID (James)

- EHL/JSL: Add new PCI IDs (James)

- Rename pipe update tracepoints (Ville)

- Fix FBC on GLK+ (Ville)

- GuC fixes and improvements (Daniele, Don Hiatt, Stuart Summers, Matthew Brost)

- Display debugfs improvements (Ville)

- Hotplug/irq fixes (Matt)

- PSR fixes and improvements (José)

- DRM_I915_GEM_MMAP_OFFSET ioctl (Abdiel)

- Static analysis fixes (Colin Ian King)

- Register sysctl path globally (Venkata Sandeep Dhanalakota)

- Introduce new macros for tracing (Venkata Sandeep Dhanalakota)

- Migrate gt towards intel_uncore_read/write (Andi)

- Add rps frequency translation helpers (Andi)

- Fix TGL transcoder clock off sequence (José)

- Fix TGL port A audio (Kai Vehmanen)

- TGL render decompression (DK)

- GEM/GT improvements and fixes across the board (Chris)

- Couple of backmerges (Jani)

Signed-off-by: Dave Airlie <airlied@redhat.com>

# gpg: Signature made Tue 24 Dec 2019 03:20:48 AM AEST
# gpg:                using RSA key D398079D26ABEE6F
# gpg: Good signature from "Jani Nikula <jani.nikula@intel.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 1565 A65B 77B0 632E 1124  E59C D398 079D 26AB EE6F

# Conflicts:
#	drivers/gpu/drm/i915/display/intel_fbc.c
#	drivers/gpu/drm/i915/gt/intel_lrc.c
#	drivers/gpu/drm/i915/i915_gem.c
From: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/87lfr3rkry.fsf@intel.com
Dave Airlie 2019-12-27 15:25:04 +10:00
Parent 5f773e551a 3446c63a0f
Commit 3ae3271443
240 changed files: 12478 additions and 7894 deletions

View file

@ -466,9 +466,6 @@ GuC-based command submission
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
:doc: GuC-based command submission
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
:internal:
HuC
---
.. kernel-doc:: drivers/gpu/drm/i915/gt/uc/intel_huc.c

View file

@ -69,10 +69,6 @@ ACPI_MODULE_NAME("acpi_lpss");
#define LPSS_SAVE_CTX BIT(4)
#define LPSS_NO_D3_DELAY BIT(5)
/* Crystal Cove PMIC shares same ACPI ID between different platforms */
#define BYT_CRC_HRV 2
#define CHT_CRC_HRV 3
struct lpss_private_data;
struct lpss_device_desc {
@ -158,7 +154,7 @@ static void lpss_deassert_reset(struct lpss_private_data *pdata)
*/
static struct pwm_lookup byt_pwm_lookup[] = {
PWM_LOOKUP_WITH_MODULE("80860F09:00", 0, "0000:00:02.0",
"pwm_backlight", 0, PWM_POLARITY_NORMAL,
"pwm_soc_backlight", 0, PWM_POLARITY_NORMAL,
"pwm-lpss-platform"),
};
@ -170,8 +166,7 @@ static void byt_pwm_setup(struct lpss_private_data *pdata)
if (!adev->pnp.unique_id || strcmp(adev->pnp.unique_id, "1"))
return;
if (!acpi_dev_present("INT33FD", NULL, BYT_CRC_HRV))
pwm_add_table(byt_pwm_lookup, ARRAY_SIZE(byt_pwm_lookup));
pwm_add_table(byt_pwm_lookup, ARRAY_SIZE(byt_pwm_lookup));
}
#define LPSS_I2C_ENABLE 0x6c
@ -204,7 +199,7 @@ static void byt_i2c_setup(struct lpss_private_data *pdata)
/* BSW PWM used for backlight control by the i915 driver */
static struct pwm_lookup bsw_pwm_lookup[] = {
PWM_LOOKUP_WITH_MODULE("80862288:00", 0, "0000:00:02.0",
"pwm_backlight", 0, PWM_POLARITY_NORMAL,
"pwm_soc_backlight", 0, PWM_POLARITY_NORMAL,
"pwm-lpss-platform"),
};

View file

@ -54,6 +54,9 @@ config DRM_DEBUG_MM
If in doubt, say "N".
config DRM_EXPORT_FOR_TESTS
bool
config DRM_DEBUG_SELFTEST
tristate "kselftests for DRM"
depends on DRM
@ -61,6 +64,7 @@ config DRM_DEBUG_SELFTEST
select PRIME_NUMBERS
select DRM_LIB_RANDOM
select DRM_KMS_HELPER
select DRM_EXPORT_FOR_TESTS if m
default n
help
This option provides kernel modules that can be used to run
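
DRM_EXPORT_FOR_TESTS is the new gate for test-only symbol exports. A sketch of the mechanism, assuming the conventional definition (the exact header and spelling are not shown in this diff and are given from memory):

#ifdef CONFIG_DRM_EXPORT_FOR_TESTS
#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x) EXPORT_SYMBOL(x)
#else
#define EXPORT_SYMBOL_FOR_TESTS_ONLY(x)
#endif

Symbols tagged this way, such as mock_drm_getfile() later in this diff, are therefore only exported when a test config selects DRM_EXPORT_FOR_TESTS, which the hunks above do only for modular (=m) builds.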

View file

@ -57,6 +57,22 @@
* for these functions.
*/
/**
* __drm_atomic_helper_crtc_state_reset - reset the CRTC state
* @crtc_state: atomic CRTC state, must not be NULL
* @crtc: CRTC object, must not be NULL
*
* Initializes the newly allocated @crtc_state with default
* values. This is useful for drivers that subclass the CRTC state.
*/
void
__drm_atomic_helper_crtc_state_reset(struct drm_crtc_state *crtc_state,
struct drm_crtc *crtc)
{
crtc_state->crtc = crtc;
}
EXPORT_SYMBOL(__drm_atomic_helper_crtc_state_reset);
/**
* __drm_atomic_helper_crtc_reset - reset state on CRTC
* @crtc: drm CRTC
@ -74,7 +90,7 @@ __drm_atomic_helper_crtc_reset(struct drm_crtc *crtc,
struct drm_crtc_state *crtc_state)
{
if (crtc_state)
crtc_state->crtc = crtc;
__drm_atomic_helper_crtc_state_reset(crtc_state, crtc);
crtc->state = crtc_state;
}
@ -212,23 +228,43 @@ void drm_atomic_helper_crtc_destroy_state(struct drm_crtc *crtc,
EXPORT_SYMBOL(drm_atomic_helper_crtc_destroy_state);
/**
* __drm_atomic_helper_plane_reset - resets planes state to default values
* __drm_atomic_helper_plane_state_reset - resets plane state to default values
* @plane_state: atomic plane state, must not be NULL
* @plane: plane object, must not be NULL
* @state: atomic plane state, must not be NULL
*
* Initializes plane state to default. This is useful for drivers that subclass
* the plane state.
* Initializes the newly allocated @plane_state with default
* values. This is useful for drivers that subclass the plane state.
*/
void __drm_atomic_helper_plane_state_reset(struct drm_plane_state *plane_state,
struct drm_plane *plane)
{
plane_state->plane = plane;
plane_state->rotation = DRM_MODE_ROTATE_0;
plane_state->alpha = DRM_BLEND_ALPHA_OPAQUE;
plane_state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
}
EXPORT_SYMBOL(__drm_atomic_helper_plane_state_reset);
/**
* __drm_atomic_helper_plane_reset - reset state on plane
* @plane: drm plane
* @plane_state: plane state to assign
*
* Initializes the newly allocated @plane_state and assigns it to
* the &drm_plane->state pointer of @plane, usually required when
* initializing the drivers or when called from the &drm_plane_funcs.reset
* hook.
*
* This is useful for drivers that subclass the plane state.
*/
void __drm_atomic_helper_plane_reset(struct drm_plane *plane,
struct drm_plane_state *state)
struct drm_plane_state *plane_state)
{
state->plane = plane;
state->rotation = DRM_MODE_ROTATE_0;
if (plane_state)
__drm_atomic_helper_plane_state_reset(plane_state, plane);
state->alpha = DRM_BLEND_ALPHA_OPAQUE;
state->pixel_blend_mode = DRM_MODE_BLEND_PREMULTI;
plane->state = state;
plane->state = plane_state;
}
EXPORT_SYMBOL(__drm_atomic_helper_plane_reset);
@ -335,6 +371,22 @@ void drm_atomic_helper_plane_destroy_state(struct drm_plane *plane,
}
EXPORT_SYMBOL(drm_atomic_helper_plane_destroy_state);
/**
* __drm_atomic_helper_connector_state_reset - reset the connector state
* @conn_state: atomic connector state, must not be NULL
* @connector: connectotr object, must not be NULL
*
* Initializes the newly allocated @conn_state with default
* values. This is useful for drivers that subclass the connector state.
*/
void
__drm_atomic_helper_connector_state_reset(struct drm_connector_state *conn_state,
struct drm_connector *connector)
{
conn_state->connector = connector;
}
EXPORT_SYMBOL(__drm_atomic_helper_connector_state_reset);
/**
* __drm_atomic_helper_connector_reset - reset state on connector
* @connector: drm connector
@ -352,7 +404,7 @@ __drm_atomic_helper_connector_reset(struct drm_connector *connector,
struct drm_connector_state *conn_state)
{
if (conn_state)
conn_state->connector = connector;
__drm_atomic_helper_connector_state_reset(conn_state, connector);
connector->state = conn_state;
}
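
The new *_state_reset() helpers initialize an embedded core state without also assigning it, which is exactly what drivers that subclass the state need. A minimal sketch of a driver reset hook built on top of them, assuming a hypothetical foo driver with private CRTC state:

struct foo_crtc_state {
	struct drm_crtc_state base;
	u32 dither;	/* hypothetical driver-private field */
};

static void foo_crtc_reset(struct drm_crtc *crtc)
{
	struct foo_crtc_state *state = kzalloc(sizeof(*state), GFP_KERNEL);

	if (crtc->state)
		crtc->funcs->atomic_destroy_state(crtc, crtc->state);

	/* initializes defaults and assigns crtc->state (NULL-safe) */
	__drm_atomic_helper_crtc_reset(crtc, state ? &state->base : NULL);
}

i915 uses the *_state_reset() variants directly because it also zeroes and re-seeds its own fields around the core reset, as the intel_plane_state_reset() hunk later in this diff shows.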

View file

@ -31,7 +31,9 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
#include <linux/anon_inodes.h>
#include <linux/dma-fence.h>
#include <linux/file.h>
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/poll.h>
@ -754,3 +756,43 @@ void drm_send_event(struct drm_device *dev, struct drm_pending_event *e)
spin_unlock_irqrestore(&dev->event_lock, irqflags);
}
EXPORT_SYMBOL(drm_send_event);
/**
* mock_drm_getfile - Create a new struct file for the drm device
* @minor: drm minor to wrap (e.g. #drm_device.primary)
* @flags: file creation mode (O_RDWR etc)
*
* This creates a new struct file that wraps a DRM file context around a
* DRM minor. This mimics userspace opening e.g. /dev/dri/card0, but without
* invoking userspace. The struct file may be operated on using its f_op
* (the drm_device.driver.fops) to mimic userspace operations, or be supplied
* to userspace facing functions as an internal/anonymous client.
*
* RETURNS:
* Pointer to newly created struct file, ERR_PTR on failure.
*/
struct file *mock_drm_getfile(struct drm_minor *minor, unsigned int flags)
{
struct drm_device *dev = minor->dev;
struct drm_file *priv;
struct file *file;
priv = drm_file_alloc(minor);
if (IS_ERR(priv))
return ERR_CAST(priv);
file = anon_inode_getfile("drm", dev->driver->fops, priv, flags);
if (IS_ERR(file)) {
drm_file_free(priv);
return file;
}
/* Everyone shares a single global address space */
file->f_mapping = dev->anon_inode->i_mapping;
drm_dev_get(dev);
priv->filp = file;
return file;
}
EXPORT_SYMBOL_FOR_TESTS_ONLY(mock_drm_getfile);
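
A sketch of how a selftest can now open the device without userspace, assuming the usual i915 selftest scaffolding (the surrounding variable names are illustrative):

	struct file *file;

	file = mock_drm_getfile(i915->drm.primary, O_RDWR);
	if (IS_ERR(file))
		return PTR_ERR(file);

	/* drive the device through file->f_op as an anonymous client */

	fput(file);

fput() drops the reference and tears the DRM file context down through the normal release path, as if userspace had closed /dev/dri/card0.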

View file

@ -27,6 +27,7 @@ config DRM_I915_DEBUG
select X86_MSR # used by igt/pm_rpm
select DRM_VGEM # used by igt/prime_vgem (dmabuf interop checks)
select DRM_DEBUG_MM if DRM=y
select DRM_EXPORT_FOR_TESTS if m
select DRM_DEBUG_SELFTEST
select DMABUF_SELFTESTS
select SW_SYNC # signaling validation framework (igt/syncobj*)
@ -149,6 +150,7 @@ config DRM_I915_SELFTEST
bool "Enable selftests upon driver load"
depends on DRM_I915
default n
select DRM_EXPORT_FOR_TESTS if m
select FAULT_INJECTION
select PRIME_NUMBERS
help

View file

@ -75,6 +75,9 @@ i915-$(CONFIG_PERF_EVENTS) += i915_pmu.o
# "Graphics Technology" (aka we talk to the gpu)
obj-y += gt/
gt-y += \
gt/debugfs_engines.o \
gt/debugfs_gt.o \
gt/debugfs_gt_pm.o \
gt/intel_breadcrumbs.o \
gt/intel_context.o \
gt/intel_engine_cs.o \
@ -259,6 +262,7 @@ i915-$(CONFIG_DRM_I915_SELFTEST) += \
selftests/i915_selftest.o \
selftests/igt_flush_test.o \
selftests/igt_live_test.o \
selftests/igt_mmap.o \
selftests/igt_reset.o \
selftests/igt_spinner.o

View file

@ -34,6 +34,7 @@
#include "intel_ddi.h"
#include "intel_dsi.h"
#include "intel_panel.h"
#include "intel_vdsc.h"
static inline int header_credits_available(struct drm_i915_private *dev_priv,
enum transcoder dsi_trans)
@ -276,7 +277,7 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
if (intel_dsi->dual_link == DSI_DUAL_LINK_FRONT_BACK) {
const struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
u32 dss_ctl2;
u16 hactive = adjusted_mode->crtc_hdisplay;
u16 dl_buffer_depth;
@ -301,18 +302,31 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
I915_WRITE(DSS_CTL1, dss_ctl1);
}
static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder)
/* aka DSI 8X clock */
static int afe_clk(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
int bpp;
if (crtc_state->dsc.compression_enable)
bpp = crtc_state->dsc.compressed_bpp;
else
bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
return DIV_ROUND_CLOSEST(intel_dsi->pclk * bpp, intel_dsi->lane_count);
}
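/*
 * Worked example with illustrative numbers (not from this patch): an
 * RGB888 panel (24 bpp) driven over 4 lanes at a 150000 kHz pixel
 * clock gives afe_clk = 150000 * 24 / 4 = 900000 kHz, i.e. a 900 MHz
 * 8X clock; with DSC compressing to 12 bpp it drops to 450000 kHz.
 */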
static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
enum port port;
u32 bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
u32 afe_clk_khz; /* 8X Clock */
int afe_clk_khz;
u32 esc_clk_div_m;
afe_clk_khz = DIV_ROUND_CLOSEST(intel_dsi->pclk * bpp,
intel_dsi->lane_count);
afe_clk_khz = afe_clk(encoder, crtc_state);
esc_clk_div_m = DIV_ROUND_UP(afe_clk_khz, DSI_MAX_ESC_CLK);
for_each_dsi_port(port, intel_dsi->ports) {
@ -490,7 +504,9 @@ static void gen11_dsi_enable_ddi_buffer(struct intel_encoder *encoder)
}
}
static void gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder)
static void
gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
@ -531,7 +547,7 @@ static void gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder)
* leave all fields at HW default values.
*/
if (IS_GEN(dev_priv, 11)) {
if (intel_dsi_bitrate(intel_dsi) <= 800000) {
if (afe_clk(encoder, crtc_state) <= 800000) {
for_each_dsi_port(port, intel_dsi->ports) {
tmp = I915_READ(DPHY_TA_TIMING_PARAM(port));
tmp &= ~TA_SURE_MASK;
@ -625,7 +641,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
enum pipe pipe = intel_crtc->pipe;
u32 tmp;
enum port port;
@ -641,7 +657,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
tmp |= EOTP_DISABLED;
/* enable link calibration if freq > 1.5Gbps */
if (intel_dsi_bitrate(intel_dsi) >= 1500 * 1000) {
if (afe_clk(encoder, pipe_config) >= 1500 * 1000) {
tmp &= ~LINK_CALIBRATION_MASK;
tmp |= CALIBRATION_ENABLED_INITIAL_ONLY;
}
@ -667,22 +683,26 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
/* select pixel format */
tmp &= ~PIX_FMT_MASK;
switch (intel_dsi->pixel_format) {
default:
MISSING_CASE(intel_dsi->pixel_format);
/* fallthrough */
case MIPI_DSI_FMT_RGB565:
tmp |= PIX_FMT_RGB565;
break;
case MIPI_DSI_FMT_RGB666_PACKED:
tmp |= PIX_FMT_RGB666_PACKED;
break;
case MIPI_DSI_FMT_RGB666:
tmp |= PIX_FMT_RGB666_LOOSE;
break;
case MIPI_DSI_FMT_RGB888:
tmp |= PIX_FMT_RGB888;
break;
if (pipe_config->dsc.compression_enable) {
tmp |= PIX_FMT_COMPRESSED;
} else {
switch (intel_dsi->pixel_format) {
default:
MISSING_CASE(intel_dsi->pixel_format);
/* fallthrough */
case MIPI_DSI_FMT_RGB565:
tmp |= PIX_FMT_RGB565;
break;
case MIPI_DSI_FMT_RGB666_PACKED:
tmp |= PIX_FMT_RGB666_PACKED;
break;
case MIPI_DSI_FMT_RGB666:
tmp |= PIX_FMT_RGB666_LOOSE;
break;
case MIPI_DSI_FMT_RGB888:
tmp |= PIX_FMT_RGB888;
break;
}
}
if (INTEL_GEN(dev_priv) >= 12) {
@ -745,6 +765,9 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
case PIPE_C:
tmp |= TRANS_DDI_EDP_INPUT_C_ONOFF;
break;
case PIPE_D:
tmp |= TRANS_DDI_EDP_INPUT_D_ONOFF;
break;
}
/* enable DDI buffer */
@ -763,12 +786,12 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
static void
gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config)
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
const struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
enum port port;
enum transcoder dsi_trans;
/* horizontal timings */
@ -776,11 +799,25 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
u16 hback_porch;
/* vertical timings */
u16 vtotal, vactive, vsync_start, vsync_end, vsync_shift;
int mul = 1, div = 1;
/*
* Adjust horizontal timings (htotal, hsync_start, hsync_end) to account
* for slower link speed if DSC is enabled.
*
* The compression frequency ratio is the ratio between compressed and
* non-compressed link speeds, and simplifies down to the ratio between
* compressed and non-compressed bpp.
*/
if (crtc_state->dsc.compression_enable) {
mul = crtc_state->dsc.compressed_bpp;
div = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
}
hactive = adjusted_mode->crtc_hdisplay;
htotal = adjusted_mode->crtc_htotal;
hsync_start = adjusted_mode->crtc_hsync_start;
hsync_end = adjusted_mode->crtc_hsync_end;
htotal = DIV_ROUND_UP(adjusted_mode->crtc_htotal * mul, div);
hsync_start = DIV_ROUND_UP(adjusted_mode->crtc_hsync_start * mul, div);
hsync_end = DIV_ROUND_UP(adjusted_mode->crtc_hsync_end * mul, div);
hsync_size = hsync_end - hsync_start;
hback_porch = (adjusted_mode->crtc_htotal -
adjusted_mode->crtc_hsync_end);
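/*
 * Worked example (illustrative): RGB888 (24 bpp) compressed by DSC to
 * 12 bpp gives mul/div = 12/24, so htotal, hsync_start and hsync_end
 * are halved to match the compressed link rate, while hactive keeps
 * the full panel width.
 */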
@ -904,7 +941,8 @@ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder)
}
}
static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder)
static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
@ -919,7 +957,7 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder)
* TIME_NS = (BYTE_CLK_COUNT * 8 * 10^6)/ Bitrate
* ESCAPE_CLK_COUNT = TIME_NS/ESC_CLK_NS
*/
divisor = intel_dsi_tlpx_ns(intel_dsi) * intel_dsi_bitrate(intel_dsi) * 1000;
divisor = intel_dsi_tlpx_ns(intel_dsi) * afe_clk(encoder, crtc_state) * 1000;
mul = 8 * 1000000;
hs_tx_timeout = DIV_ROUND_UP(intel_dsi->hs_tx_timeout * mul,
divisor);
@ -955,7 +993,7 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder)
static void
gen11_dsi_enable_port_and_phy(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config)
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
@ -972,13 +1010,13 @@ gen11_dsi_enable_port_and_phy(struct intel_encoder *encoder,
gen11_dsi_enable_ddi_buffer(encoder);
/* setup D-PHY timings */
gen11_dsi_setup_dphy_timings(encoder);
gen11_dsi_setup_dphy_timings(encoder, crtc_state);
/* step 4h: setup DSI protocol timeouts */
gen11_dsi_setup_timeouts(encoder);
gen11_dsi_setup_timeouts(encoder, crtc_state);
/* Step (4h, 4i, 4j, 4k): Configure transcoder */
gen11_dsi_configure_transcoder(encoder, pipe_config);
gen11_dsi_configure_transcoder(encoder, crtc_state);
/* Step 4l: Gate DDI clocks */
if (IS_GEN(dev_priv, 11))
@ -1025,14 +1063,14 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
}
static void gen11_dsi_pre_pll_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state)
{
/* step2: enable IO power */
gen11_dsi_enable_io_power(encoder);
/* step3: enable DSI PLL */
gen11_dsi_program_esc_clk_div(encoder);
gen11_dsi_program_esc_clk_div(encoder, crtc_state);
}
static void gen11_dsi_pre_enable(struct intel_encoder *encoder,
@ -1050,6 +1088,8 @@ static void gen11_dsi_pre_enable(struct intel_encoder *encoder,
/* step5: program and powerup panel */
gen11_dsi_powerup_panel(encoder);
intel_dsc_enable(encoder, pipe_config);
/* step6c: configure transcoder timings */
gen11_dsi_set_transcoder_timings(encoder, pipe_config);
@ -1211,12 +1251,42 @@ static void gen11_dsi_disable(struct intel_encoder *encoder,
gen11_dsi_disable_io_power(encoder);
}
static void gen11_dsi_post_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
intel_crtc_vblank_off(old_crtc_state);
intel_dsc_disable(old_crtc_state);
skylake_scaler_disable(old_crtc_state);
}
static enum drm_mode_status gen11_dsi_mode_valid(struct drm_connector *connector,
struct drm_display_mode *mode)
{
/* FIXME: DSC? */
return intel_dsi_mode_valid(connector, mode);
}
static void gen11_dsi_get_timings(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
if (pipe_config->dsc.compressed_bpp) {
int div = pipe_config->dsc.compressed_bpp;
int mul = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
adjusted_mode->crtc_htotal =
DIV_ROUND_UP(adjusted_mode->crtc_htotal * mul, div);
adjusted_mode->crtc_hsync_start =
DIV_ROUND_UP(adjusted_mode->crtc_hsync_start * mul, div);
adjusted_mode->crtc_hsync_end =
DIV_ROUND_UP(adjusted_mode->crtc_hsync_end * mul, div);
}
if (intel_dsi->dual_link) {
adjusted_mode->crtc_hdisplay *= 2;
@ -1242,22 +1312,66 @@ static void gen11_dsi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
intel_dsc_get_config(encoder, pipe_config);
/* FIXME: adapt icl_ddi_clock_get() for DSI and use that? */
pipe_config->port_clock =
cnl_calc_wrpll_link(dev_priv, &pipe_config->dpll_hw_state);
pipe_config->base.adjusted_mode.crtc_clock = intel_dsi->pclk;
pipe_config->hw.adjusted_mode.crtc_clock = intel_dsi->pclk;
if (intel_dsi->dual_link)
pipe_config->base.adjusted_mode.crtc_clock *= 2;
pipe_config->hw.adjusted_mode.crtc_clock *= 2;
gen11_dsi_get_timings(encoder, pipe_config);
pipe_config->output_types |= BIT(INTEL_OUTPUT_DSI);
pipe_config->pipe_bpp = bdw_get_pipemisc_bpp(crtc);
}
static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
int dsc_max_bpc = INTEL_GEN(dev_priv) >= 12 ? 12 : 10;
bool use_dsc;
int ret;
use_dsc = intel_bios_get_dsc_params(encoder, crtc_state, dsc_max_bpc);
if (!use_dsc)
return 0;
if (crtc_state->pipe_bpp < 8 * 3)
return -EINVAL;
/* FIXME: split only when necessary */
if (crtc_state->dsc.slice_count > 1)
crtc_state->dsc.dsc_split = true;
vdsc_cfg->convert_rgb = true;
ret = intel_dsc_compute_params(encoder, crtc_state);
if (ret)
return ret;
/* DSI specific sanity checks on the common code */
WARN_ON(vdsc_cfg->vbr_enable);
WARN_ON(vdsc_cfg->simple_422);
WARN_ON(vdsc_cfg->pic_width % vdsc_cfg->slice_width);
WARN_ON(vdsc_cfg->slice_height < 8);
WARN_ON(vdsc_cfg->pic_height % vdsc_cfg->slice_height);
ret = drm_dsc_compute_rc_parameters(vdsc_cfg);
if (ret)
return ret;
crtc_state->dsc.compression_enable = true;
return 0;
}
static int gen11_dsi_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
@ -1265,11 +1379,11 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi,
base);
struct intel_connector *intel_connector = intel_dsi->attached_connector;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
const struct drm_display_mode *fixed_mode =
intel_connector->panel.fixed_mode;
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
pipe_config->output_format = INTEL_OUTPUT_FORMAT_RGB;
intel_fixed_panel_mode(fixed_mode, adjusted_mode);
@ -1283,8 +1397,17 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
else
pipe_config->cpu_transcoder = TRANSCODER_DSI_0;
if (intel_dsi->pixel_format == MIPI_DSI_FMT_RGB888)
pipe_config->pipe_bpp = 24;
else
pipe_config->pipe_bpp = 18;
pipe_config->clock_set = true;
pipe_config->port_clock = intel_dsi_bitrate(intel_dsi) / 5;
if (gen11_dsi_dsc_compute_config(encoder, pipe_config))
DRM_DEBUG_KMS("Attempting to use DSC failed\n");
pipe_config->port_clock = afe_clk(encoder, pipe_config) / 5;
return 0;
}
@ -1292,8 +1415,13 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
static void gen11_dsi_get_power_domains(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state)
{
get_dsi_io_power_domains(to_i915(encoder->base.dev),
enc_to_intel_dsi(&encoder->base));
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
get_dsi_io_power_domains(i915, enc_to_intel_dsi(&encoder->base));
if (crtc_state->dsc.compression_enable)
intel_display_power_get(i915,
intel_dsc_power_domain(crtc_state));
}
static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
@ -1325,6 +1453,9 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
case TRANS_DDI_EDP_INPUT_C_ONOFF:
*pipe = PIPE_C;
break;
case TRANS_DDI_EDP_INPUT_D_ONOFF:
*pipe = PIPE_D;
break;
default:
DRM_ERROR("Invalid PIPE input\n");
goto out;
@ -1360,7 +1491,7 @@ static const struct drm_connector_funcs gen11_dsi_connector_funcs = {
static const struct drm_connector_helper_funcs gen11_dsi_connector_helper_funcs = {
.get_modes = intel_dsi_get_modes,
.mode_valid = intel_dsi_mode_valid,
.mode_valid = gen11_dsi_mode_valid,
.atomic_check = intel_digital_connector_atomic_check,
};
@ -1577,6 +1708,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv)
encoder->pre_pll_enable = gen11_dsi_pre_pll_enable;
encoder->pre_enable = gen11_dsi_pre_enable;
encoder->disable = gen11_dsi_disable;
encoder->post_disable = gen11_dsi_post_disable;
encoder->port = port;
encoder->get_config = gen11_dsi_get_config;
encoder->update_pipe = intel_panel_update_backlight;

View file

@ -186,13 +186,22 @@ intel_digital_connector_duplicate_state(struct drm_connector *connector)
struct drm_crtc_state *
intel_crtc_duplicate_state(struct drm_crtc *crtc)
{
const struct intel_crtc_state *old_crtc_state = to_intel_crtc_state(crtc->state);
struct intel_crtc_state *crtc_state;
crtc_state = kmemdup(crtc->state, sizeof(*crtc_state), GFP_KERNEL);
crtc_state = kmemdup(old_crtc_state, sizeof(*crtc_state), GFP_KERNEL);
if (!crtc_state)
return NULL;
__drm_atomic_helper_crtc_duplicate_state(crtc, &crtc_state->base);
__drm_atomic_helper_crtc_duplicate_state(crtc, &crtc_state->uapi);
/* copy color blobs */
if (crtc_state->hw.degamma_lut)
drm_property_blob_get(crtc_state->hw.degamma_lut);
if (crtc_state->hw.ctm)
drm_property_blob_get(crtc_state->hw.ctm);
if (crtc_state->hw.gamma_lut)
drm_property_blob_get(crtc_state->hw.gamma_lut);
crtc_state->update_pipe = false;
crtc_state->disable_lp_wm = false;
@ -205,7 +214,29 @@ intel_crtc_duplicate_state(struct drm_crtc *crtc)
crtc_state->fb_bits = 0;
crtc_state->update_planes = 0;
return &crtc_state->base;
return &crtc_state->uapi;
}
static void intel_crtc_put_color_blobs(struct intel_crtc_state *crtc_state)
{
drm_property_blob_put(crtc_state->hw.degamma_lut);
drm_property_blob_put(crtc_state->hw.gamma_lut);
drm_property_blob_put(crtc_state->hw.ctm);
}
void intel_crtc_free_hw_state(struct intel_crtc_state *crtc_state)
{
intel_crtc_put_color_blobs(crtc_state);
}
void intel_crtc_copy_color_blobs(struct intel_crtc_state *crtc_state)
{
drm_property_replace_blob(&crtc_state->hw.degamma_lut,
crtc_state->uapi.degamma_lut);
drm_property_replace_blob(&crtc_state->hw.gamma_lut,
crtc_state->uapi.gamma_lut);
drm_property_replace_blob(&crtc_state->hw.ctm,
crtc_state->uapi.ctm);
}
/**
@ -220,7 +251,11 @@ void
intel_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state)
{
drm_atomic_helper_crtc_destroy_state(crtc, state);
struct intel_crtc_state *crtc_state = to_intel_crtc_state(state);
__drm_atomic_helper_crtc_destroy_state(&crtc_state->uapi);
intel_crtc_free_hw_state(crtc_state);
kfree(crtc_state);
}
static void intel_atomic_setup_scaler(struct intel_crtc_scaler_state *scaler_state,
@ -249,10 +284,10 @@ static void intel_atomic_setup_scaler(struct intel_crtc_scaler_state *scaler_sta
return;
/* set scaler mode */
if (plane_state && plane_state->base.fb &&
plane_state->base.fb->format->is_yuv &&
plane_state->base.fb->format->num_planes > 1) {
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
if (plane_state && plane_state->hw.fb &&
plane_state->hw.fb->format->is_yuv &&
plane_state->hw.fb->format->num_planes > 1) {
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
if (IS_GEN(dev_priv, 9) &&
!IS_GEMINILAKE(dev_priv)) {
mode = SKL_PS_SCALER_MODE_NV12;
@ -319,7 +354,7 @@ int intel_atomic_setup_scalers(struct drm_i915_private *dev_priv,
struct intel_plane_state *plane_state = NULL;
struct intel_crtc_scaler_state *scaler_state =
&crtc_state->scaler_state;
struct drm_atomic_state *drm_state = crtc_state->base.state;
struct drm_atomic_state *drm_state = crtc_state->uapi.state;
struct intel_atomic_state *intel_state = to_intel_atomic_state(drm_state);
int num_scalers_need;
int i;

View file

@ -36,6 +36,8 @@ intel_digital_connector_duplicate_state(struct drm_connector *connector);
struct drm_crtc_state *intel_crtc_duplicate_state(struct drm_crtc *crtc);
void intel_crtc_destroy_state(struct drm_crtc *crtc,
struct drm_crtc_state *state);
void intel_crtc_free_hw_state(struct intel_crtc_state *crtc_state);
void intel_crtc_copy_color_blobs(struct intel_crtc_state *crtc_state);
struct drm_atomic_state *intel_atomic_state_alloc(struct drm_device *dev);
void intel_atomic_state_clear(struct drm_atomic_state *state);

View file

@ -41,6 +41,16 @@
#include "intel_pm.h"
#include "intel_sprite.h"
static void intel_plane_state_reset(struct intel_plane_state *plane_state,
struct intel_plane *plane)
{
memset(plane_state, 0, sizeof(*plane_state));
__drm_atomic_helper_plane_state_reset(&plane_state->uapi, &plane->base);
plane_state->scaler_id = -1;
}
struct intel_plane *intel_plane_alloc(void)
{
struct intel_plane_state *plane_state;
@ -56,8 +66,9 @@ struct intel_plane *intel_plane_alloc(void)
return ERR_PTR(-ENOMEM);
}
__drm_atomic_helper_plane_reset(&plane->base, &plane_state->base);
plane_state->scaler_id = -1;
intel_plane_state_reset(plane_state, plane);
plane->base.state = &plane_state->uapi;
return plane;
}
@ -80,22 +91,24 @@ void intel_plane_free(struct intel_plane *plane)
struct drm_plane_state *
intel_plane_duplicate_state(struct drm_plane *plane)
{
struct drm_plane_state *state;
struct intel_plane_state *intel_state;
intel_state = kmemdup(plane->state, sizeof(*intel_state), GFP_KERNEL);
intel_state = to_intel_plane_state(plane->state);
intel_state = kmemdup(intel_state, sizeof(*intel_state), GFP_KERNEL);
if (!intel_state)
return NULL;
state = &intel_state->base;
__drm_atomic_helper_plane_duplicate_state(plane, state);
__drm_atomic_helper_plane_duplicate_state(plane, &intel_state->uapi);
intel_state->vma = NULL;
intel_state->flags = 0;
return state;
/* add reference to fb */
if (intel_state->hw.fb)
drm_framebuffer_get(intel_state->hw.fb);
return &intel_state->uapi;
}
/**
@ -110,18 +123,22 @@ void
intel_plane_destroy_state(struct drm_plane *plane,
struct drm_plane_state *state)
{
WARN_ON(to_intel_plane_state(state)->vma);
struct intel_plane_state *plane_state = to_intel_plane_state(state);
WARN_ON(plane_state->vma);
drm_atomic_helper_plane_destroy_state(plane, state);
__drm_atomic_helper_plane_destroy_state(&plane_state->uapi);
if (plane_state->hw.fb)
drm_framebuffer_put(plane_state->hw.fb);
kfree(plane_state);
}
unsigned int intel_plane_data_rate(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int cpp;
if (!plane_state->base.visible)
if (!plane_state->uapi.visible)
return 0;
cpp = fb->format->cpp[0];
@ -144,10 +161,10 @@ bool intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct intel_plane_state *plane_state =
intel_atomic_get_new_plane_state(state, plane);
struct intel_crtc *crtc = to_intel_crtc(plane_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(plane_state->hw.crtc);
struct intel_crtc_state *crtc_state;
if (!plane_state->base.visible || !plane->min_cdclk)
if (!plane_state->uapi.visible || !plane->min_cdclk)
return false;
crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
@ -176,23 +193,52 @@ bool intel_plane_calc_min_cdclk(struct intel_atomic_state *state,
return false;
}
static void intel_plane_clear_hw_state(struct intel_plane_state *plane_state)
{
if (plane_state->hw.fb)
drm_framebuffer_put(plane_state->hw.fb);
memset(&plane_state->hw, 0, sizeof(plane_state->hw));
}
void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state,
const struct intel_plane_state *from_plane_state)
{
intel_plane_clear_hw_state(plane_state);
plane_state->hw.crtc = from_plane_state->uapi.crtc;
plane_state->hw.fb = from_plane_state->uapi.fb;
if (plane_state->hw.fb)
drm_framebuffer_get(plane_state->hw.fb);
plane_state->hw.alpha = from_plane_state->uapi.alpha;
plane_state->hw.pixel_blend_mode =
from_plane_state->uapi.pixel_blend_mode;
plane_state->hw.rotation = from_plane_state->uapi.rotation;
plane_state->hw.color_encoding = from_plane_state->uapi.color_encoding;
plane_state->hw.color_range = from_plane_state->uapi.color_range;
}
int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
struct intel_crtc_state *new_crtc_state,
const struct intel_plane_state *old_plane_state,
struct intel_plane_state *new_plane_state)
{
struct intel_plane *plane = to_intel_plane(new_plane_state->base.plane);
const struct drm_framebuffer *fb = new_plane_state->base.fb;
struct intel_plane *plane = to_intel_plane(new_plane_state->uapi.plane);
const struct drm_framebuffer *fb;
int ret;
intel_plane_copy_uapi_to_hw_state(new_plane_state, new_plane_state);
fb = new_plane_state->hw.fb;
new_crtc_state->active_planes &= ~BIT(plane->id);
new_crtc_state->nv12_planes &= ~BIT(plane->id);
new_crtc_state->c8_planes &= ~BIT(plane->id);
new_crtc_state->data_rate[plane->id] = 0;
new_crtc_state->min_cdclk[plane->id] = 0;
new_plane_state->base.visible = false;
new_plane_state->uapi.visible = false;
if (!new_plane_state->base.crtc && !old_plane_state->base.crtc)
if (!new_plane_state->hw.crtc && !old_plane_state->hw.crtc)
return 0;
ret = plane->check_plane(new_crtc_state, new_plane_state);
@ -200,18 +246,18 @@ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_
return ret;
/* FIXME pre-g4x don't work like this */
if (new_plane_state->base.visible)
if (new_plane_state->uapi.visible)
new_crtc_state->active_planes |= BIT(plane->id);
if (new_plane_state->base.visible &&
drm_format_info_is_yuv_semiplanar(fb->format))
if (new_plane_state->uapi.visible &&
intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier))
new_crtc_state->nv12_planes |= BIT(plane->id);
if (new_plane_state->base.visible &&
if (new_plane_state->uapi.visible &&
fb->format->format == DRM_FORMAT_C8)
new_crtc_state->c8_planes |= BIT(plane->id);
if (new_plane_state->base.visible || old_plane_state->base.visible)
if (new_plane_state->uapi.visible || old_plane_state->uapi.visible)
new_crtc_state->update_planes |= BIT(plane->id);
new_crtc_state->data_rate[plane->id] =
@ -225,11 +271,11 @@ static struct intel_crtc *
get_crtc_from_states(const struct intel_plane_state *old_plane_state,
const struct intel_plane_state *new_plane_state)
{
if (new_plane_state->base.crtc)
return to_intel_crtc(new_plane_state->base.crtc);
if (new_plane_state->uapi.crtc)
return to_intel_crtc(new_plane_state->uapi.crtc);
if (old_plane_state->base.crtc)
return to_intel_crtc(old_plane_state->base.crtc);
if (old_plane_state->uapi.crtc)
return to_intel_crtc(old_plane_state->uapi.crtc);
return NULL;
}
@ -246,7 +292,7 @@ int intel_plane_atomic_check(struct intel_atomic_state *state,
const struct intel_crtc_state *old_crtc_state;
struct intel_crtc_state *new_crtc_state;
new_plane_state->base.visible = false;
new_plane_state->uapi.visible = false;
if (!crtc)
return 0;
@ -307,26 +353,16 @@ void intel_update_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
trace_intel_update_plane(&plane->base, crtc);
plane->update_plane(plane, crtc_state, plane_state);
}
void intel_update_slave(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
trace_intel_update_plane(&plane->base, crtc);
plane->update_slave(plane, crtc_state, plane_state);
}
void intel_disable_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
trace_intel_disable_plane(&plane->base, crtc);
plane->disable_plane(plane, crtc_state);
@ -355,25 +391,9 @@ void skl_update_planes_on_crtc(struct intel_atomic_state *state,
struct intel_plane_state *new_plane_state =
intel_atomic_get_new_plane_state(state, plane);
if (new_plane_state->base.visible) {
if (new_plane_state->uapi.visible ||
new_plane_state->planar_slave) {
intel_update_plane(plane, new_crtc_state, new_plane_state);
} else if (new_plane_state->planar_slave) {
struct intel_plane *master =
new_plane_state->planar_linked_plane;
/*
* We update the slave plane from this function because
* programming it from the master plane's update_plane
* callback runs into issues when the Y plane is
* reassigned, disabled or used by a different plane.
*
* The slave plane is updated with the master plane's
* plane_state.
*/
new_plane_state =
intel_atomic_get_new_plane_state(state, master);
intel_update_slave(plane, new_crtc_state, new_plane_state);
} else {
intel_disable_plane(plane, new_crtc_state);
}
@ -395,7 +415,7 @@ void i9xx_update_planes_on_crtc(struct intel_atomic_state *state,
!(update_mask & BIT(plane->id)))
continue;
if (new_plane_state->base.visible)
if (new_plane_state->uapi.visible)
intel_update_plane(plane, new_crtc_state, new_plane_state);
else
intel_disable_plane(plane, new_crtc_state);
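
All of the base-to-uapi/hw renames in this file follow from the headline "separate hardware and uapi state" work: the state now embeds two copies, with intel_plane_copy_uapi_to_hw_state() syncing them during atomic check so that hw can later diverge from what userspace committed (e.g. for planar slave planes). Roughly, judging by the fields that function copies (an abridged sketch, not the full driver struct):

struct intel_plane_state {
	struct drm_plane_state uapi;	/* what userspace requested */
	struct {
		struct drm_crtc *crtc;
		struct drm_framebuffer *fb;	/* holds its own reference */
		u16 alpha;
		u16 pixel_blend_mode;
		unsigned int rotation;
		enum drm_color_encoding color_encoding;
		enum drm_color_range color_range;
	} hw;				/* what actually gets programmed */
	/* ... driver-private members ... */
};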

View file

@ -20,12 +20,11 @@ extern const struct drm_plane_helper_funcs intel_plane_helper_funcs;
unsigned int intel_plane_data_rate(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state,
const struct intel_plane_state *from_plane_state);
void intel_update_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void intel_update_slave(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void intel_disable_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state);
struct intel_plane *intel_plane_alloc(void);

View file

@ -234,7 +234,7 @@ static const struct hdmi_aud_ncts hdmi_aud_ncts_36bpp[] = {
static u32 audio_config_hdmi_pixel_clock(const struct intel_crtc_state *crtc_state)
{
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
int i;
for (i = 0; i < ARRAY_SIZE(hdmi_audio_clock); i++) {
@ -555,7 +555,7 @@ static void ilk_audio_codec_disable(struct intel_encoder *encoder,
const struct drm_connector_state *old_conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
enum port port = encoder->port;
u32 tmp, eldv;
@ -602,7 +602,7 @@ static void ilk_audio_codec_enable(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_connector *connector = conn_state->connector;
enum pipe pipe = crtc->pipe;
enum port port = encoder->port;
@ -692,10 +692,10 @@ void intel_audio_codec_enable(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct i915_audio_component *acomp = dev_priv->audio_component;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_connector *connector = conn_state->connector;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
enum port port = encoder->port;
enum pipe pipe = crtc->pipe;
@ -753,7 +753,7 @@ void intel_audio_codec_disable(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct i915_audio_component *acomp = dev_priv->audio_component;
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
enum port port = encoder->port;
enum pipe pipe = crtc->pipe;

View file

@ -29,6 +29,7 @@
#include <drm/i915_drm.h>
#include "display/intel_display.h"
#include "display/intel_display_types.h"
#include "display/intel_gmbus.h"
#include "i915_drv.h"
@ -58,6 +59,13 @@
* that.
*/
/* Wrapper for VBT child device config */
struct display_device_data {
struct child_device_config child;
struct dsc_compression_parameters_entry *dsc;
struct list_head node;
};
#define SLAVE_ADDR1 0x70
#define SLAVE_ADDR2 0x72
@ -202,17 +210,12 @@ get_lvds_fp_timing(const struct bdb_header *bdb,
return (const struct lvds_fp_timing *)((const u8 *)bdb + ofs);
}
/* Try to find integrated panel data */
/* Parse general panel options */
static void
parse_lfp_panel_data(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
parse_panel_options(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
const struct bdb_lvds_options *lvds_options;
const struct bdb_lvds_lfp_data *lvds_lfp_data;
const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs;
const struct lvds_dvo_timing *panel_dvo_timing;
const struct lvds_fp_timing *fp_timing;
struct drm_display_mode *panel_fixed_mode;
int panel_type;
int drrs_mode;
int ret;
@ -261,6 +264,19 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
DRM_DEBUG_KMS("DRRS not supported (VBT input)\n");
break;
}
}
/* Try to find integrated panel timing data */
static void
parse_lfp_panel_dtd(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
const struct bdb_lvds_lfp_data *lvds_lfp_data;
const struct bdb_lvds_lfp_data_ptrs *lvds_lfp_data_ptrs;
const struct lvds_dvo_timing *panel_dvo_timing;
const struct lvds_fp_timing *fp_timing;
struct drm_display_mode *panel_fixed_mode;
int panel_type = dev_priv->vbt.panel_type;
lvds_lfp_data = find_section(bdb, BDB_LVDS_LFP_DATA);
if (!lvds_lfp_data)
@ -282,7 +298,7 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
DRM_DEBUG_KMS("Found panel mode in BIOS VBT tables:\n");
DRM_DEBUG_KMS("Found panel mode in BIOS VBT legacy lfp table:\n");
drm_mode_debug_printmodeline(panel_fixed_mode);
fp_timing = get_lvds_fp_timing(bdb, lvds_lfp_data,
@ -299,6 +315,98 @@ parse_lfp_panel_data(struct drm_i915_private *dev_priv,
}
}
static void
parse_generic_dtd(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
const struct bdb_generic_dtd *generic_dtd;
const struct generic_dtd_entry *dtd;
struct drm_display_mode *panel_fixed_mode;
int num_dtd;
generic_dtd = find_section(bdb, BDB_GENERIC_DTD);
if (!generic_dtd)
return;
if (generic_dtd->gdtd_size < sizeof(struct generic_dtd_entry)) {
DRM_ERROR("GDTD size %u is too small.\n",
generic_dtd->gdtd_size);
return;
} else if (generic_dtd->gdtd_size !=
sizeof(struct generic_dtd_entry)) {
DRM_ERROR("Unexpected GDTD size %u\n", generic_dtd->gdtd_size);
/* DTD has unknown fields, but keep going */
}
num_dtd = (get_blocksize(generic_dtd) -
sizeof(struct bdb_generic_dtd)) / generic_dtd->gdtd_size;
if (dev_priv->vbt.panel_type >= num_dtd) {
DRM_ERROR("Panel type %d not found in table of %d DTD's\n",
dev_priv->vbt.panel_type, num_dtd);
return;
}
dtd = &generic_dtd->dtd[dev_priv->vbt.panel_type];
panel_fixed_mode = kzalloc(sizeof(*panel_fixed_mode), GFP_KERNEL);
if (!panel_fixed_mode)
return;
panel_fixed_mode->hdisplay = dtd->hactive;
panel_fixed_mode->hsync_start =
panel_fixed_mode->hdisplay + dtd->hfront_porch;
panel_fixed_mode->hsync_end =
panel_fixed_mode->hsync_start + dtd->hsync;
panel_fixed_mode->htotal = panel_fixed_mode->hsync_end;
panel_fixed_mode->vdisplay = dtd->vactive;
panel_fixed_mode->vsync_start =
panel_fixed_mode->vdisplay + dtd->vfront_porch;
panel_fixed_mode->vsync_end =
panel_fixed_mode->vsync_start + dtd->vsync;
panel_fixed_mode->vtotal = panel_fixed_mode->vsync_end;
panel_fixed_mode->clock = dtd->pixel_clock;
panel_fixed_mode->width_mm = dtd->width_mm;
panel_fixed_mode->height_mm = dtd->height_mm;
panel_fixed_mode->type = DRM_MODE_TYPE_PREFERRED;
drm_mode_set_name(panel_fixed_mode);
if (dtd->hsync_positive_polarity)
panel_fixed_mode->flags |= DRM_MODE_FLAG_PHSYNC;
else
panel_fixed_mode->flags |= DRM_MODE_FLAG_NHSYNC;
if (dtd->vsync_positive_polarity)
panel_fixed_mode->flags |= DRM_MODE_FLAG_PVSYNC;
else
panel_fixed_mode->flags |= DRM_MODE_FLAG_NVSYNC;
DRM_DEBUG_KMS("Found panel mode in BIOS VBT generic dtd table:\n");
drm_mode_debug_printmodeline(panel_fixed_mode);
dev_priv->vbt.lfp_lvds_vbt_mode = panel_fixed_mode;
}
static void
parse_panel_dtd(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
/*
* Older VBTs provided DTD information for internal displays
* through the "LFP panel DTD" block (42). As of VBT revision 229,
* that block is now deprecated and DTD information should be provided
* via a newer "generic DTD" block (58). Just to be safe, we'll
* try the new generic DTD block first on VBT >= 229, but still fall
* back to trying the old LFP block if that fails.
*/
if (bdb->version >= 229)
parse_generic_dtd(dev_priv, bdb);
if (!dev_priv->vbt.lfp_lvds_vbt_mode)
parse_lfp_panel_dtd(dev_priv, bdb);
}
static void
parse_lfp_backlight(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
@ -449,8 +557,9 @@ static void
parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, u8 bdb_version)
{
struct sdvo_device_mapping *mapping;
const struct display_device_data *devdata;
const struct child_device_config *child;
int i, count = 0;
int count = 0;
/*
* Only parse SDVO mappings on gens that could have SDVO. This isn't
@ -461,8 +570,8 @@ parse_sdvo_device_mapping(struct drm_i915_private *dev_priv, u8 bdb_version)
return;
}
for (i = 0, count = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
if (child->slave_addr != SLAVE_ADDR1 &&
child->slave_addr != SLAVE_ADDR2) {
@ -552,16 +661,45 @@ parse_driver_features(struct drm_i915_private *dev_priv,
dev_priv->vbt.int_lvds_support = 0;
}
DRM_DEBUG_KMS("DRRS State Enabled:%d\n", driver->drrs_enabled);
if (bdb->version < 228) {
DRM_DEBUG_KMS("DRRS State Enabled:%d\n", driver->drrs_enabled);
/*
* If DRRS is not supported, drrs_type has to be set to 0.
* This is because VBT is configured in such a way that
* static DRRS is 0 and DRRS not supported is represented by
* driver->drrs_enabled=false
*/
if (!driver->drrs_enabled)
dev_priv->vbt.drrs_type = DRRS_NOT_SUPPORTED;
dev_priv->vbt.psr.enable = driver->psr_enabled;
}
}
static void
parse_power_conservation_features(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
const struct bdb_lfp_power *power;
u8 panel_type = dev_priv->vbt.panel_type;
if (bdb->version < 228)
return;
power = find_section(bdb, BDB_LVDS_POWER);
if (!power)
return;
dev_priv->vbt.psr.enable = power->psr & BIT(panel_type);
/*
* If DRRS is not supported, drrs_type has to be set to 0.
* This is because VBT is configured in such a way that
* static DRRS is 0 and DRRS not supported is represented by
* driver->drrs_enabled=false
* power->drrs & BIT(panel_type)=false
*/
if (!driver->drrs_enabled)
if (!(power->drrs & BIT(panel_type)))
dev_priv->vbt.drrs_type = DRRS_NOT_SUPPORTED;
dev_priv->vbt.psr.enable = driver->psr_enabled;
}
static void
@ -1230,6 +1368,57 @@ err:
memset(dev_priv->vbt.dsi.sequence, 0, sizeof(dev_priv->vbt.dsi.sequence));
}
static void
parse_compression_parameters(struct drm_i915_private *i915,
const struct bdb_header *bdb)
{
const struct bdb_compression_parameters *params;
struct display_device_data *devdata;
const struct child_device_config *child;
u16 block_size;
int index;
if (bdb->version < 198)
return;
params = find_section(bdb, BDB_COMPRESSION_PARAMETERS);
if (params) {
/* Sanity checks */
if (params->entry_size != sizeof(params->data[0])) {
DRM_DEBUG_KMS("VBT: unsupported compression param entry size\n");
return;
}
block_size = get_blocksize(params);
if (block_size < sizeof(*params)) {
DRM_DEBUG_KMS("VBT: expected 16 compression param entries\n");
return;
}
}
list_for_each_entry(devdata, &i915->vbt.display_devices, node) {
child = &devdata->child;
if (!child->compression_enable)
continue;
if (!params) {
DRM_DEBUG_KMS("VBT: compression params not available\n");
continue;
}
if (child->compression_method_cps) {
DRM_DEBUG_KMS("VBT: CPS compression not supported\n");
continue;
}
index = child->compression_structure_index;
devdata->dsc = kmemdup(&params->data[index],
sizeof(*devdata->dsc), GFP_KERNEL);
}
}
static u8 translate_iboost(u8 val)
{
static const u8 mapping[] = { 1, 3, 7 }; /* See VBT spec */
@ -1246,7 +1435,7 @@ static enum port get_port_by_ddc_pin(struct drm_i915_private *i915, u8 ddc_pin)
const struct ddi_vbt_port_info *info;
enum port port;
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
for_each_port(port) {
info = &i915->vbt.ddi_port_info[port];
if (info->child && ddc_pin == info->alternate_ddc_pin)
@ -1297,7 +1486,7 @@ static enum port get_port_by_aux_ch(struct drm_i915_private *i915, u8 aux_ch)
const struct ddi_vbt_port_info *info;
enum port port;
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
for_each_port(port) {
info = &i915->vbt.ddi_port_info[port];
if (info->child && aux_ch == info->alternate_aux_channel)
@ -1418,9 +1607,10 @@ static enum port dvo_port_to_port(u8 dvo_port)
}
static void parse_ddi_port(struct drm_i915_private *dev_priv,
const struct child_device_config *child,
struct display_device_data *devdata,
u8 bdb_version)
{
const struct child_device_config *child = &devdata->child;
struct ddi_vbt_port_info *info;
bool is_dvi, is_hdmi, is_dp, is_edp, is_crt;
enum port port;
@ -1443,7 +1633,7 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
is_hdmi = is_dvi && (child->device_type & DEVICE_TYPE_NOT_HDMI_OUTPUT) == 0;
is_edp = is_dp && (child->device_type & DEVICE_TYPE_INTERNAL_CONNECTOR);
if (port == PORT_A && is_dvi) {
if (port == PORT_A && is_dvi && INTEL_GEN(dev_priv) < 12) {
DRM_DEBUG_KMS("VBT claims port A supports DVI%s, ignoring\n",
is_hdmi ? "/HDMI" : "");
is_dvi = false;
@ -1461,26 +1651,11 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
if (bdb_version >= 209)
info->supports_tbt = child->tbt;
DRM_DEBUG_KMS("Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d LSPCON:%d USB-Type-C:%d TBT:%d\n",
DRM_DEBUG_KMS("Port %c VBT info: CRT:%d DVI:%d HDMI:%d DP:%d eDP:%d LSPCON:%d USB-Type-C:%d TBT:%d DSC:%d\n",
port_name(port), is_crt, is_dvi, is_hdmi, is_dp, is_edp,
HAS_LSPCON(dev_priv) && child->lspcon,
info->supports_typec_usb, info->supports_tbt);
if (is_edp && is_dvi)
DRM_DEBUG_KMS("Internal DP port %c is TMDS compatible\n",
port_name(port));
if (is_crt && port != PORT_E)
DRM_DEBUG_KMS("Port %c is analog\n", port_name(port));
if (is_crt && (is_dvi || is_dp))
DRM_DEBUG_KMS("Analog port %c is also DP or TMDS compatible\n",
port_name(port));
if (is_dvi && (port == PORT_A || port == PORT_E))
DRM_DEBUG_KMS("Port %c is TMDS compatible\n", port_name(port));
if (!is_dvi && !is_dp && !is_crt)
DRM_DEBUG_KMS("Port %c is not DP/TMDS/CRT compatible\n",
port_name(port));
if (is_edp && (port == PORT_B || port == PORT_C || port == PORT_E))
DRM_DEBUG_KMS("Port %c is internal DP\n", port_name(port));
info->supports_typec_usb, info->supports_tbt,
devdata->dsc != NULL);
if (is_dvi) {
u8 ddc_pin;
@ -1509,6 +1684,7 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
port_name(port),
hdmi_level_shift);
info->hdmi_level_shift = hdmi_level_shift;
info->hdmi_level_shift_set = true;
}
if (bdb_version >= 204) {
@ -1571,8 +1747,7 @@ static void parse_ddi_port(struct drm_i915_private *dev_priv,
static void parse_ddi_ports(struct drm_i915_private *dev_priv, u8 bdb_version)
{
const struct child_device_config *child;
int i;
struct display_device_data *devdata;
if (!HAS_DDI(dev_priv) && !IS_CHERRYVIEW(dev_priv))
return;
@ -1580,11 +1755,8 @@ static void parse_ddi_ports(struct drm_i915_private *dev_priv, u8 bdb_version)
if (bdb_version < 155)
return;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
parse_ddi_port(dev_priv, child, bdb_version);
}
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node)
parse_ddi_port(dev_priv, devdata, bdb_version);
}
static void
@ -1592,8 +1764,9 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
const struct bdb_header *bdb)
{
const struct bdb_general_definitions *defs;
struct display_device_data *devdata;
const struct child_device_config *child;
int i, child_device_num, count;
int i, child_device_num;
u8 expected_size;
u16 block_size;
int bus_pin;
@ -1649,26 +1822,7 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
/* get the number of child device */
child_device_num = (block_size - sizeof(*defs)) / defs->child_dev_size;
count = 0;
/* get the number of child device that is present */
for (i = 0; i < child_device_num; i++) {
child = child_device_ptr(defs, i);
if (!child->device_type)
continue;
count++;
}
if (!count) {
DRM_DEBUG_KMS("no child dev is parsed from VBT\n");
return;
}
dev_priv->vbt.child_dev = kcalloc(count, sizeof(*child), GFP_KERNEL);
if (!dev_priv->vbt.child_dev) {
DRM_DEBUG_KMS("No memory space for child device\n");
return;
}
dev_priv->vbt.child_dev_num = count;
count = 0;
for (i = 0; i < child_device_num; i++) {
child = child_device_ptr(defs, i);
if (!child->device_type)
@ -1677,23 +1831,29 @@ parse_general_definitions(struct drm_i915_private *dev_priv,
DRM_DEBUG_KMS("Found VBT child device with type 0x%x\n",
child->device_type);
devdata = kzalloc(sizeof(*devdata), GFP_KERNEL);
if (!devdata)
break;
/*
* Copy as much as we know (sizeof) and is available
* (child_dev_size) of the child device. Accessing the data must
* depend on VBT version.
* (child_dev_size) of the child device config. Accessing the
* data must depend on VBT version.
*/
memcpy(dev_priv->vbt.child_dev + count, child,
memcpy(&devdata->child, child,
min_t(size_t, defs->child_dev_size, sizeof(*child)));
count++;
list_add_tail(&devdata->node, &dev_priv->vbt.display_devices);
}
if (list_empty(&dev_priv->vbt.display_devices))
DRM_DEBUG_KMS("no child dev is parsed from VBT\n");
}
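
The replacement parsing above trades the old two-pass scheme (count the valid children, kcalloc an array, copy into it) for a single pass that allocates one node per valid child and appends it to vbt.display_devices. A minimal userspace model of that pattern, with illustrative names rather than the driver's:

#include <stdlib.h>
#include <string.h>

struct child_cfg {
	unsigned short device_type;
	unsigned char raw[32];
};

struct devdata {
	struct devdata *next;
	struct child_cfg child;
};

static struct devdata *build_list(const struct child_cfg *src, int n,
				  size_t per_entry_size)
{
	struct devdata *head = NULL, **tail = &head;
	int i;

	for (i = 0; i < n; i++) {
		struct devdata *d;

		if (!src[i].device_type)	/* absent device, skip */
			continue;

		d = calloc(1, sizeof(*d));
		if (!d)
			break;

		/* Copy only what is both known (sizeof) and present
		 * (per_entry_size), exactly like the min_t() above. */
		memcpy(&d->child, &src[i],
		       per_entry_size < sizeof(d->child) ?
		       per_entry_size : sizeof(d->child));

		*tail = d;			/* append, preserving order */
		tail = &d->next;
	}

	return head;
}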
/* Common defaults which may be overridden by VBT. */
static void
init_vbt_defaults(struct drm_i915_private *dev_priv)
{
enum port port;
dev_priv->vbt.crt_ddc_pin = GMBUS_PIN_VGADDC;
/* Default to having backlight */
@ -1721,13 +1881,6 @@ init_vbt_defaults(struct drm_i915_private *dev_priv)
dev_priv->vbt.lvds_ssc_freq = intel_bios_ssc_frequency(dev_priv,
!HAS_PCH_SPLIT(dev_priv));
DRM_DEBUG_KMS("Set default to SSC at %d kHz\n", dev_priv->vbt.lvds_ssc_freq);
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
struct ddi_vbt_port_info *info =
&dev_priv->vbt.ddi_port_info[port];
info->hdmi_level_shift = HDMI_LEVEL_SHIFT_UNKNOWN;
}
}
/* Defaults to initialize only if there is no VBT. */
@ -1736,7 +1889,7 @@ init_vbt_missing_defaults(struct drm_i915_private *dev_priv)
{
enum port port;
for (port = PORT_A; port < I915_MAX_PORTS; port++) {
for_each_port(port) {
struct ddi_vbt_port_info *info =
&dev_priv->vbt.ddi_port_info[port];
enum phy phy = intel_port_to_phy(dev_priv, port);
@ -1787,6 +1940,13 @@ bool intel_bios_is_valid_vbt(const void *buf, size_t size)
return false;
}
if (vbt->vbt_size > size) {
DRM_DEBUG_DRIVER("VBT incomplete (vbt_size overflows)\n");
return false;
}
size = vbt->vbt_size;
if (range_overflows_t(size_t,
vbt->bdb_offset,
sizeof(struct bdb_header),
@ -1804,28 +1964,61 @@ bool intel_bios_is_valid_vbt(const void *buf, size_t size)
return vbt;
}
static const struct vbt_header *find_vbt(void __iomem *bios, size_t size)
static struct vbt_header *oprom_get_vbt(struct drm_i915_private *dev_priv)
{
size_t i;
struct pci_dev *pdev = dev_priv->drm.pdev;
void __iomem *p = NULL, *oprom;
struct vbt_header *vbt;
u16 vbt_size;
size_t i, size;
oprom = pci_map_rom(pdev, &size);
if (!oprom)
return NULL;
/* Scour memory looking for the VBT signature. */
for (i = 0; i + 4 < size; i++) {
void *vbt;
if (ioread32(bios + i) != *((const u32 *) "$VBT"))
for (i = 0; i + 4 < size; i += 4) {
if (ioread32(oprom + i) != *((const u32 *)"$VBT"))
continue;
/*
* This is the one place where we explicitly discard the address
* space (__iomem) of the BIOS/VBT.
*/
vbt = (void __force *) bios + i;
if (intel_bios_is_valid_vbt(vbt, size - i))
return vbt;
p = oprom + i;
size -= i;
break;
}
if (!p)
goto err_unmap_oprom;
if (sizeof(struct vbt_header) > size) {
DRM_DEBUG_DRIVER("VBT header incomplete\n");
goto err_unmap_oprom;
}
vbt_size = ioread16(p + offsetof(struct vbt_header, vbt_size));
if (vbt_size > size) {
DRM_DEBUG_DRIVER("VBT incomplete (vbt_size overflows)\n");
goto err_unmap_oprom;
}
/* The rest will be validated by intel_bios_is_valid_vbt() */
vbt = kmalloc(vbt_size, GFP_KERNEL);
if (!vbt)
goto err_unmap_oprom;
memcpy_fromio(vbt, p, vbt_size);
if (!intel_bios_is_valid_vbt(vbt, vbt_size))
goto err_free_vbt;
pci_unmap_rom(pdev, oprom);
return vbt;
err_free_vbt:
kfree(vbt);
err_unmap_oprom:
pci_unmap_rom(pdev, oprom);
return NULL;
}
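
The new oprom_get_vbt() walks the mapped option ROM in 4-byte strides looking for the "$VBT" signature, validates that both the header and the header's self-declared vbt_size fit inside the mapping, and only then copies the table into ordinary kernel memory so pci_unmap_rom() can run before returning. A simplified userspace model of that flow; the byte buffer stands in for the ROM mapping and the size-field offset is illustrative, the real layout comes from struct vbt_header:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative offset of the table's self-declared size. */
#define VBT_SIZE_OFFSET 24
#define VBT_MIN_HEADER  (VBT_SIZE_OFFSET + 2)

static void *find_and_copy_vbt(const uint8_t *rom, size_t size)
{
	size_t i;

	/* The signature is assumed 4-byte aligned, hence i += 4. */
	for (i = 0; i + 4 < size; i += 4) {
		uint16_t vbt_size;
		void *vbt;

		if (memcmp(rom + i, "$VBT", 4) != 0)
			continue;

		rom += i;
		size -= i;

		if (size < VBT_MIN_HEADER)
			return NULL;	/* header incomplete */

		vbt_size = rom[VBT_SIZE_OFFSET] |
			   (uint16_t)(rom[VBT_SIZE_OFFSET + 1] << 8);
		if (vbt_size > size)
			return NULL;	/* vbt_size overflows the mapping */

		vbt = malloc(vbt_size);
		if (vbt)
			memcpy(vbt, rom, vbt_size);
		return vbt;		/* caller validates and frees */
	}

	return NULL;
}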
@ -1839,10 +2032,11 @@ static const struct vbt_header *find_vbt(void __iomem *bios, size_t size)
*/
void intel_bios_init(struct drm_i915_private *dev_priv)
{
struct pci_dev *pdev = dev_priv->drm.pdev;
const struct vbt_header *vbt = dev_priv->opregion.vbt;
struct vbt_header *oprom_vbt = NULL;
const struct bdb_header *bdb;
u8 __iomem *bios = NULL;
INIT_LIST_HEAD(&dev_priv->vbt.display_devices);
if (!HAS_DISPLAY(dev_priv) || !INTEL_DISPLAY_ENABLED(dev_priv)) {
DRM_DEBUG_KMS("Skipping VBT init due to disabled display.\n");
@ -1853,15 +2047,11 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
/* If the OpRegion does not have VBT, look in PCI ROM. */
if (!vbt) {
size_t size;
bios = pci_map_rom(pdev, &size);
if (!bios)
oprom_vbt = oprom_get_vbt(dev_priv);
if (!oprom_vbt)
goto out;
vbt = find_vbt(bios, size);
if (!vbt)
goto out;
vbt = oprom_vbt;
DRM_DEBUG_KMS("Found valid VBT in PCI ROM\n");
}
@ -1874,15 +2064,20 @@ void intel_bios_init(struct drm_i915_private *dev_priv)
/* Grab useful general definitions */
parse_general_features(dev_priv, bdb);
parse_general_definitions(dev_priv, bdb);
parse_lfp_panel_data(dev_priv, bdb);
parse_panel_options(dev_priv, bdb);
parse_panel_dtd(dev_priv, bdb);
parse_lfp_backlight(dev_priv, bdb);
parse_sdvo_panel_data(dev_priv, bdb);
parse_driver_features(dev_priv, bdb);
parse_power_conservation_features(dev_priv, bdb);
parse_edp(dev_priv, bdb);
parse_psr(dev_priv, bdb);
parse_mipi_config(dev_priv, bdb);
parse_mipi_sequence(dev_priv, bdb);
/* Depends on child device list */
parse_compression_parameters(dev_priv, bdb);
/* Further processing on pre-parsed data */
parse_sdvo_device_mapping(dev_priv, bdb->version);
parse_ddi_ports(dev_priv, bdb->version);
@ -1893,8 +2088,7 @@ out:
init_vbt_missing_defaults(dev_priv);
}
if (bios)
pci_unmap_rom(pdev, bios);
kfree(oprom_vbt);
}
/**
@ -1903,9 +2097,14 @@ out:
*/
void intel_bios_driver_remove(struct drm_i915_private *dev_priv)
{
kfree(dev_priv->vbt.child_dev);
dev_priv->vbt.child_dev = NULL;
dev_priv->vbt.child_dev_num = 0;
struct display_device_data *devdata, *n;
list_for_each_entry_safe(devdata, n, &dev_priv->vbt.display_devices, node) {
list_del(&devdata->node);
kfree(devdata->dsc);
kfree(devdata);
}
kfree(dev_priv->vbt.sdvo_lvds_vbt_mode);
dev_priv->vbt.sdvo_lvds_vbt_mode = NULL;
kfree(dev_priv->vbt.lfp_lvds_vbt_mode);
@ -1929,17 +2128,18 @@ void intel_bios_driver_remove(struct drm_i915_private *dev_priv)
*/
bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv)
{
const struct display_device_data *devdata;
const struct child_device_config *child;
int i;
if (!dev_priv->vbt.int_tv_support)
return false;
if (!dev_priv->vbt.child_dev_num)
if (list_empty(&dev_priv->vbt.display_devices))
return true;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
/*
* If the device type is not TV, continue.
*/
@ -1971,14 +2171,14 @@ bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv)
*/
bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin)
{
const struct display_device_data *devdata;
const struct child_device_config *child;
int i;
if (!dev_priv->vbt.child_dev_num)
if (list_empty(&dev_priv->vbt.display_devices))
return true;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
/* If the device type is not LFP, continue.
* We have to check both the new identifiers as well as the
@ -2020,6 +2220,7 @@ bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin)
*/
bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port port)
{
const struct display_device_data *devdata;
const struct child_device_config *child;
static const struct {
u16 dp, hdmi;
@ -2030,7 +2231,6 @@ bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port por
[PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, },
[PORT_F] = { DVO_PORT_DPF, DVO_PORT_HDMIF, },
};
int i;
if (HAS_DDI(dev_priv)) {
const struct ddi_vbt_port_info *port_info =
@ -2045,11 +2245,8 @@ bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port por
if (WARN_ON(port == PORT_A) || port >= ARRAY_SIZE(port_mapping))
return false;
if (!dev_priv->vbt.child_dev_num)
return false;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
if ((child->dvo_port == port_mapping[port].dp ||
child->dvo_port == port_mapping[port].hdmi) &&
@ -2070,6 +2267,7 @@ bool intel_bios_is_port_present(struct drm_i915_private *dev_priv, enum port por
*/
bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port)
{
const struct display_device_data *devdata;
const struct child_device_config *child;
static const short port_mapping[] = {
[PORT_B] = DVO_PORT_DPB,
@ -2078,16 +2276,12 @@ bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port)
[PORT_E] = DVO_PORT_DPE,
[PORT_F] = DVO_PORT_DPF,
};
int i;
if (HAS_DDI(dev_priv))
return dev_priv->vbt.ddi_port_info[port].supports_edp;
if (!dev_priv->vbt.child_dev_num)
return false;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
if (child->dvo_port == port_mapping[port] &&
(child->device_type & DEVICE_TYPE_eDP_BITS) ==
@ -2136,13 +2330,10 @@ static bool child_dev_is_dp_dual_mode(const struct child_device_config *child,
bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv,
enum port port)
{
const struct child_device_config *child;
int i;
const struct display_device_data *devdata;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
if (child_dev_is_dp_dual_mode(child, port))
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
if (child_dev_is_dp_dual_mode(&devdata->child, port))
return true;
}
@ -2159,12 +2350,12 @@ bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv,
bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv,
enum port *port)
{
const struct display_device_data *devdata;
const struct child_device_config *child;
u8 dvo_port;
int i;
for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
child = dev_priv->vbt.child_dev + i;
list_for_each_entry(devdata, &dev_priv->vbt.display_devices, node) {
child = &devdata->child;
if (!(child->device_type & DEVICE_TYPE_MIPI_OUTPUT))
continue;
@ -2188,6 +2379,104 @@ bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv,
return false;
}
static void fill_dsc(struct intel_crtc_state *crtc_state,
struct dsc_compression_parameters_entry *dsc,
int dsc_max_bpc)
{
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
int bpc = 8;
vdsc_cfg->dsc_version_major = dsc->version_major;
vdsc_cfg->dsc_version_minor = dsc->version_minor;
if (dsc->support_12bpc && dsc_max_bpc >= 12)
bpc = 12;
else if (dsc->support_10bpc && dsc_max_bpc >= 10)
bpc = 10;
else if (dsc->support_8bpc && dsc_max_bpc >= 8)
bpc = 8;
else
DRM_DEBUG_KMS("VBT: Unsupported BPC %d for DCS\n",
dsc_max_bpc);
crtc_state->pipe_bpp = bpc * 3;
crtc_state->dsc.compressed_bpp = min(crtc_state->pipe_bpp,
VBT_DSC_MAX_BPP(dsc->max_bpp));
/*
* FIXME: This is ugly, and slice count should take DSC engine
* throughput etc. into account.
*
* Also, per spec DSI supports 1, 2, 3 or 4 horizontal slices.
*/
if (dsc->slices_per_line & BIT(2)) {
crtc_state->dsc.slice_count = 4;
} else if (dsc->slices_per_line & BIT(1)) {
crtc_state->dsc.slice_count = 2;
} else {
/* FIXME */
if (!(dsc->slices_per_line & BIT(0)))
DRM_DEBUG_KMS("VBT: Unsupported DSC slice count for DSI\n");
crtc_state->dsc.slice_count = 1;
}
if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
crtc_state->dsc.slice_count != 0)
DRM_DEBUG_KMS("VBT: DSC hdisplay %d not divisible by slice count %d\n",
crtc_state->hw.adjusted_mode.crtc_hdisplay,
crtc_state->dsc.slice_count);
/*
* FIXME: Use VBT rc_buffer_block_size and rc_buffer_size for the
* implementation specific physical rate buffer size. Currently we use
* the required rate buffer model size calculated in
* drm_dsc_compute_rc_parameters() according to VESA DSC Annex E.
*
* The VBT rc_buffer_block_size and rc_buffer_size definitions
* correspond to DP 1.4 DPCD offsets 0x62 and 0x63. The DP DSC
* implementation should also use the DPCD (or perhaps VBT for eDP)
* provided value for the buffer size.
*/
/* FIXME: DSI spec says bpc + 1 for this one */
vdsc_cfg->line_buf_depth = VBT_DSC_LINE_BUFFER_DEPTH(dsc->line_buffer_depth);
vdsc_cfg->block_pred_enable = dsc->block_prediction_enable;
vdsc_cfg->slice_height = dsc->slice_height;
}
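
For a concrete trace of the bpc selection above: with dsc_max_bpc = 10 and a VBT advertising only 8- and 12-bpc support, the 12-bpc branch fails (10 < 12), the 10-bpc branch fails (not advertised), and bpc resolves to 8, giving pipe_bpp = 24. The slice mask decode is equally mechanical; a standalone version under the same bit semantics as the hunk above:

#include <stdio.h>

/* Bit 2 -> 4 slices, bit 1 -> 2 slices, bit 0 -> 1 slice; fall back
 * to 1 slice (with a complaint) if no bit is set, as above. */
static int decode_slices_per_line(unsigned int mask)
{
	if (mask & (1u << 2))
		return 4;
	if (mask & (1u << 1))
		return 2;
	if (!(mask & (1u << 0)))
		fprintf(stderr, "unsupported slice mask 0x%x\n", mask);
	return 1;
}

int main(void)
{
	unsigned int m;

	for (m = 0; m < 8; m++)
		printf("mask 0x%x -> %d slice(s)\n", m,
		       decode_slices_per_line(m));
	return 0;
}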
/* FIXME: initially DSI specific */
bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state,
int dsc_max_bpc)
{
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
const struct display_device_data *devdata;
const struct child_device_config *child;
list_for_each_entry(devdata, &i915->vbt.display_devices, node) {
child = &devdata->child;
if (!(child->device_type & DEVICE_TYPE_MIPI_OUTPUT))
continue;
if (child->dvo_port - DVO_PORT_MIPIA == encoder->port) {
if (!devdata->dsc)
return false;
if (crtc_state)
fill_dsc(crtc_state, devdata->dsc, dsc_max_bpc);
return true;
}
}
return false;
}
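
Note that intel_bios_get_dsc_params() doubles as a capability probe: a NULL crtc_state reports whether the VBT carries DSC parameters for the encoder's DSI port without filling anything, while a real crtc_state also populates the config. A hypothetical call sequence, not taken from this patch:

/* Probe once without state, then fill for real in compute_config. */
bool has_vbt_dsc = intel_bios_get_dsc_params(encoder, NULL, 0);

if (has_vbt_dsc && !intel_bios_get_dsc_params(encoder, crtc_state, 12))
	DRM_DEBUG_KMS("VBT DSC params no longer available\n");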
/**
* intel_bios_is_port_hpd_inverted - is HPD inverted for %port
* @i915: i915 device instance


@ -35,6 +35,8 @@
#include <drm/i915_drm.h>
struct drm_i915_private;
struct intel_crtc_state;
struct intel_encoder;
enum port;
enum intel_backlight_type {
@ -242,5 +244,8 @@ bool intel_bios_is_port_hpd_inverted(const struct drm_i915_private *i915,
bool intel_bios_is_lspcon_present(const struct drm_i915_private *i915,
enum port port);
enum aux_ch intel_bios_port_aux_ch(struct drm_i915_private *dev_priv, enum port port);
bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state,
int dsc_max_bpc);
#endif /* _INTEL_BIOS_H_ */


@ -15,7 +15,7 @@ struct intel_qgv_point {
};
struct intel_qgv_info {
struct intel_qgv_point points[3];
struct intel_qgv_point points[I915_NUM_QGV_POINTS];
u8 num_points;
u8 num_channels;
u8 t_bl;
@ -264,6 +264,9 @@ static unsigned int icl_max_bw(struct drm_i915_private *dev_priv,
void intel_bw_init_hw(struct drm_i915_private *dev_priv)
{
if (!HAS_DISPLAY(dev_priv))
return;
if (IS_GEN(dev_priv, 12))
icl_get_bw_info(dev_priv, &tgl_sa_info);
else if (IS_GEN(dev_priv, 11))
@ -273,17 +276,29 @@ void intel_bw_init_hw(struct drm_i915_private *dev_priv)
static unsigned int intel_max_data_rate(struct drm_i915_private *dev_priv,
int num_planes)
{
if (INTEL_GEN(dev_priv) >= 11)
if (INTEL_GEN(dev_priv) >= 11) {
/*
* Any bw group has the same number of QGV points
*/
const struct intel_bw_info *bi =
&dev_priv->max_bw[0];
unsigned int min_bw = UINT_MAX;
int i;
/*
* FIXME with SAGV disabled maybe we can assume
* point 1 will always be used? Seems to match
* the behaviour observed in the wild.
*/
return min3(icl_max_bw(dev_priv, num_planes, 0),
icl_max_bw(dev_priv, num_planes, 1),
icl_max_bw(dev_priv, num_planes, 2));
else
for (i = 0; i < bi->num_qgv_points; i++) {
unsigned int bw = icl_max_bw(dev_priv, num_planes, i);
min_bw = min(bw, min_bw);
}
return min_bw;
} else {
return UINT_MAX;
}
}
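
The fixed min3() over exactly three QGV points becomes a running minimum over however many points the platform reports, which is what allows more than three points on TGL. The reduction in isolation:

#include <limits.h>

/* max_bw_at() stands in for icl_max_bw() and is an assumption of
 * this sketch, not a driver API. */
static unsigned int min_bw_over_points(unsigned int (*max_bw_at)(int point),
				       int num_points)
{
	unsigned int min_bw = UINT_MAX;
	int i;

	for (i = 0; i < num_points; i++) {
		unsigned int bw = max_bw_at(i);

		if (bw < min_bw)
			min_bw = bw;
	}

	return min_bw;	/* UINT_MAX when there are no points, as before */
}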
static unsigned int intel_bw_crtc_num_active_planes(const struct intel_crtc_state *crtc_state)
@ -297,7 +312,7 @@ static unsigned int intel_bw_crtc_num_active_planes(const struct intel_crtc_stat
static unsigned int intel_bw_crtc_data_rate(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
unsigned int data_rate = 0;
enum plane_id plane_id;
@ -318,7 +333,7 @@ static unsigned int intel_bw_crtc_data_rate(const struct intel_crtc_state *crtc_
void intel_bw_crtc_update(struct intel_bw_state *bw_state,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
bw_state->data_rate[crtc->pipe] =
intel_bw_crtc_data_rate(crtc_state);


@ -1904,7 +1904,7 @@ intel_set_cdclk_post_plane_update(struct drm_i915_private *dev_priv,
static int intel_pixel_rate_to_cdclk(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
int pixel_rate = crtc_state->pixel_rate;
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
@ -1922,7 +1922,7 @@ static int intel_pixel_rate_to_cdclk(const struct intel_crtc_state *crtc_state)
static int intel_planes_min_cdclk(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_plane *plane;
int min_cdclk = 0;
@ -1936,10 +1936,10 @@ static int intel_planes_min_cdclk(const struct intel_crtc_state *crtc_state)
int intel_crtc_compute_min_cdclk(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv =
to_i915(crtc_state->base.crtc->dev);
to_i915(crtc_state->uapi.crtc->dev);
int min_cdclk;
if (!crtc_state->base.enable)
if (!crtc_state->hw.enable)
return 0;
min_cdclk = intel_pixel_rate_to_cdclk(crtc_state);
@ -2076,7 +2076,7 @@ static int bxt_compute_min_voltage_level(struct intel_atomic_state *state)
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
int ret;
if (crtc_state->base.enable)
if (crtc_state->hw.enable)
min_voltage_level = crtc_state->min_voltage_level;
else
min_voltage_level = 0;
@ -2170,7 +2170,7 @@ static int skl_dpll0_vco(struct intel_atomic_state *state)
vco = dev_priv->skl_preferred_vco_freq;
for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
if (!crtc_state->base.enable)
if (!crtc_state->hw.enable)
continue;
if (!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_EDP))
@ -2283,11 +2283,11 @@ static int intel_modeset_all_pipes(struct intel_atomic_state *state)
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
if (!crtc_state->base.active ||
drm_atomic_crtc_needs_modeset(&crtc_state->base))
if (!crtc_state->hw.active ||
drm_atomic_crtc_needs_modeset(&crtc_state->uapi))
continue;
crtc_state->base.mode_changed = true;
crtc_state->uapi.mode_changed = true;
ret = drm_atomic_add_affected_connectors(&state->base,
&crtc->base);
@ -2368,7 +2368,7 @@ int intel_modeset_calc_cdclk(struct intel_atomic_state *state)
if (IS_ERR(crtc_state))
return PTR_ERR(crtc_state);
if (drm_atomic_crtc_needs_modeset(&crtc_state->base))
if (drm_atomic_crtc_needs_modeset(&crtc_state->uapi))
pipe = INVALID_PIPE;
} else {
pipe = INVALID_PIPE;


@ -117,10 +117,10 @@ static bool lut_is_legacy(const struct drm_property_blob *lut)
static bool crtc_state_is_legacy_gamma(const struct intel_crtc_state *crtc_state)
{
return !crtc_state->base.degamma_lut &&
!crtc_state->base.ctm &&
crtc_state->base.gamma_lut &&
lut_is_legacy(crtc_state->base.gamma_lut);
return !crtc_state->hw.degamma_lut &&
!crtc_state->hw.ctm &&
crtc_state->hw.gamma_lut &&
lut_is_legacy(crtc_state->hw.gamma_lut);
}
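
The mechanical base -> uapi/hw renames that dominate the rest of this series follow the state split called out in the changelog: uapi carries what userspace requested through the atomic ioctl, hw carries what the pipe is actually programmed with, and each site must now pick the right one explicitly. A rough sketch of the shape, with an illustrative field list that is not the exact i915 layout and assumes the kernel DRM headers for the member types:

/* Sketch only; mirrors the idea, not the exact i915 struct. */
struct intel_crtc_state_sketch {
	struct drm_crtc_state uapi;	/* what userspace asked for */

	struct {
		bool active, enable;
		struct drm_property_blob *degamma_lut, *gamma_lut, *ctm;
		struct drm_display_mode mode, adjusted_mode;
	} hw;				/* what the pipe is programmed with */
};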
/*
@ -205,7 +205,7 @@ static void icl_update_output_csc(struct intel_crtc *crtc,
static bool ilk_csc_limited_range(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
/*
* FIXME if there's a gamma LUT after the CSC, we should
@ -219,7 +219,7 @@ static bool ilk_csc_limited_range(const struct intel_crtc_state *crtc_state)
static void ilk_csc_convert_ctm(const struct intel_crtc_state *crtc_state,
u16 coeffs[9])
{
const struct drm_color_ctm *ctm = crtc_state->base.ctm->data;
const struct drm_color_ctm *ctm = crtc_state->hw.ctm->data;
const u64 *input;
u64 temp[9];
int i;
@ -270,11 +270,11 @@ static void ilk_csc_convert_ctm(const struct intel_crtc_state *crtc_state,
static void ilk_load_csc_matrix(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
bool limited_color_range = ilk_csc_limited_range(crtc_state);
if (crtc_state->base.ctm) {
if (crtc_state->hw.ctm) {
u16 coeff[9];
ilk_csc_convert_ctm(crtc_state, coeff);
@ -309,10 +309,10 @@ static void ilk_load_csc_matrix(const struct intel_crtc_state *crtc_state)
static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
if (crtc_state->base.ctm) {
if (crtc_state->hw.ctm) {
u16 coeff[9];
ilk_csc_convert_ctm(crtc_state, coeff);
@ -338,12 +338,12 @@ static void icl_load_csc_matrix(const struct intel_crtc_state *crtc_state)
*/
static void cherryview_load_csc_matrix(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
if (crtc_state->base.ctm) {
const struct drm_color_ctm *ctm = crtc_state->base.ctm->data;
if (crtc_state->hw.ctm) {
const struct drm_color_ctm *ctm = crtc_state->hw.ctm->data;
u16 coeffs[9] = {};
int i;
@ -404,7 +404,7 @@ static u32 ilk_lut_10(const struct drm_color_lut *color)
static void i9xx_load_luts_internal(const struct intel_crtc_state *crtc_state,
const struct drm_property_blob *blob)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
int i;
@ -435,12 +435,12 @@ static void i9xx_load_luts_internal(const struct intel_crtc_state *crtc_state,
static void i9xx_load_luts(const struct intel_crtc_state *crtc_state)
{
i9xx_load_luts_internal(crtc_state, crtc_state->base.gamma_lut);
i9xx_load_luts_internal(crtc_state, crtc_state->hw.gamma_lut);
}
static void i9xx_color_commit(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
u32 val;
@ -453,7 +453,7 @@ static void i9xx_color_commit(const struct intel_crtc_state *crtc_state)
static void ilk_color_commit(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
u32 val;
@ -468,7 +468,7 @@ static void ilk_color_commit(const struct intel_crtc_state *crtc_state)
static void hsw_color_commit(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
I915_WRITE(GAMMA_MODE(crtc->pipe), crtc_state->gamma_mode);
@ -478,7 +478,7 @@ static void hsw_color_commit(const struct intel_crtc_state *crtc_state)
static void skl_color_commit(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
u32 val = 0;
@ -524,8 +524,8 @@ static void i965_load_lut_10p6(struct intel_crtc *crtc,
static void i965_load_luts(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT)
i9xx_load_luts(crtc_state);
@ -547,8 +547,8 @@ static void ilk_load_lut_10(struct intel_crtc *crtc,
static void ilk_load_luts(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT)
i9xx_load_luts(crtc_state);
@ -654,9 +654,9 @@ static void ivb_load_lut_ext_max(struct intel_crtc *crtc)
static void ivb_load_luts(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->base.degamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->hw.degamma_lut;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT) {
i9xx_load_luts(crtc_state);
@ -677,9 +677,9 @@ static void ivb_load_luts(const struct intel_crtc_state *crtc_state)
static void bdw_load_luts(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->base.degamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->hw.degamma_lut;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT) {
i9xx_load_luts(crtc_state);
@ -700,11 +700,11 @@ static void bdw_load_luts(const struct intel_crtc_state *crtc_state)
static void glk_load_degamma_lut(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
const u32 lut_size = INTEL_INFO(dev_priv)->color.degamma_lut_size;
const struct drm_color_lut *lut = crtc_state->base.degamma_lut->data;
const struct drm_color_lut *lut = crtc_state->hw.degamma_lut->data;
u32 i;
/*
@ -739,7 +739,7 @@ static void glk_load_degamma_lut(const struct intel_crtc_state *crtc_state)
static void glk_load_degamma_lut_linear(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
const u32 lut_size = INTEL_INFO(dev_priv)->color.degamma_lut_size;
@ -766,8 +766,8 @@ static void glk_load_degamma_lut_linear(const struct intel_crtc_state *crtc_stat
static void glk_load_luts(const struct intel_crtc_state *crtc_state)
{
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
/*
* On GLK+ both pipe CSC and degamma LUT are controlled
@ -777,7 +777,7 @@ static void glk_load_luts(const struct intel_crtc_state *crtc_state)
* the degamma LUT so that we don't have to reload
* it every time the pipe CSC is being enabled.
*/
if (crtc_state->base.degamma_lut)
if (crtc_state->hw.degamma_lut)
glk_load_degamma_lut(crtc_state);
else
glk_load_degamma_lut_linear(crtc_state);
@ -808,7 +808,7 @@ static void
icl_load_gcmax(const struct intel_crtc_state *crtc_state,
const struct drm_color_lut *color)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_dsb *dsb = intel_dsb_get(crtc);
enum pipe pipe = crtc->pipe;
@ -822,8 +822,8 @@ icl_load_gcmax(const struct intel_crtc_state *crtc_state,
static void
icl_program_gamma_superfine_segment(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *blob = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *blob = crtc_state->hw.gamma_lut;
const struct drm_color_lut *lut = blob->data;
struct intel_dsb *dsb = intel_dsb_get(crtc);
enum pipe pipe = crtc->pipe;
@ -854,8 +854,8 @@ icl_program_gamma_superfine_segment(const struct intel_crtc_state *crtc_state)
static void
icl_program_gamma_multi_segment(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *blob = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *blob = crtc_state->hw.gamma_lut;
const struct drm_color_lut *lut = blob->data;
const struct drm_color_lut *entry;
struct intel_dsb *dsb = intel_dsb_get(crtc);
@ -910,11 +910,11 @@ icl_program_gamma_multi_segment(const struct intel_crtc_state *crtc_state)
static void icl_load_luts(const struct intel_crtc_state *crtc_state)
{
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_dsb *dsb = intel_dsb_get(crtc);
if (crtc_state->base.degamma_lut)
if (crtc_state->hw.degamma_lut)
glk_load_degamma_lut(crtc_state);
switch (crtc_state->gamma_mode & GAMMA_MODE_MODE_MASK) {
@ -990,9 +990,9 @@ static void chv_load_cgm_gamma(struct intel_crtc *crtc,
static void chv_load_luts(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->base.degamma_lut;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->hw.degamma_lut;
cherryview_load_csc_matrix(crtc_state);
@ -1010,35 +1010,35 @@ static void chv_load_luts(const struct intel_crtc_state *crtc_state)
void intel_color_load_luts(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
dev_priv->display.load_luts(crtc_state);
}
void intel_color_commit(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
dev_priv->display.color_commit(crtc_state);
}
static bool intel_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->base.state);
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
return !old_crtc_state->base.gamma_lut &&
!old_crtc_state->base.degamma_lut;
return !old_crtc_state->hw.gamma_lut &&
!old_crtc_state->hw.degamma_lut;
}
static bool chv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->base.state);
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
@ -1050,14 +1050,14 @@ static bool chv_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
if (old_crtc_state->cgm_mode || new_crtc_state->cgm_mode)
return false;
return !old_crtc_state->base.gamma_lut;
return !old_crtc_state->hw.gamma_lut;
}
static bool glk_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->base.state);
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
@ -1068,19 +1068,19 @@ static bool glk_can_preload_luts(const struct intel_crtc_state *new_crtc_state)
* linear hardware degamma mid scanout.
*/
return !old_crtc_state->csc_enable &&
!old_crtc_state->base.gamma_lut;
!old_crtc_state->hw.gamma_lut;
}
int intel_color_check(struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
return dev_priv->display.color_check(crtc_state);
}
void intel_color_get_config(struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
if (dev_priv->display.read_luts)
dev_priv->display.read_luts(crtc_state);
@ -1104,16 +1104,16 @@ static bool need_plane_update(struct intel_plane *plane,
static int
intel_color_add_affected_planes(struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_atomic_state *state =
to_intel_atomic_state(new_crtc_state->base.state);
to_intel_atomic_state(new_crtc_state->uapi.state);
const struct intel_crtc_state *old_crtc_state =
intel_atomic_get_old_crtc_state(state, crtc);
struct intel_plane *plane;
if (!new_crtc_state->base.active ||
drm_atomic_crtc_needs_modeset(&new_crtc_state->base))
if (!new_crtc_state->hw.active ||
drm_atomic_crtc_needs_modeset(&new_crtc_state->uapi))
return 0;
if (new_crtc_state->gamma_enable == old_crtc_state->gamma_enable &&
@ -1155,9 +1155,9 @@ static int check_lut_size(const struct drm_property_blob *lut, int expected)
static int check_luts(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
const struct drm_property_blob *gamma_lut = crtc_state->base.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->base.degamma_lut;
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
const struct drm_property_blob *gamma_lut = crtc_state->hw.gamma_lut;
const struct drm_property_blob *degamma_lut = crtc_state->hw.degamma_lut;
int gamma_length, degamma_length;
u32 gamma_tests, degamma_tests;
@ -1205,7 +1205,7 @@ static int i9xx_color_check(struct intel_crtc_state *crtc_state)
return ret;
crtc_state->gamma_enable =
crtc_state->base.gamma_lut &&
crtc_state->hw.gamma_lut &&
!crtc_state->c8_planes;
crtc_state->gamma_mode = i9xx_gamma_mode(crtc_state);
@ -1226,11 +1226,11 @@ static u32 chv_cgm_mode(const struct intel_crtc_state *crtc_state)
if (crtc_state_is_legacy_gamma(crtc_state))
return 0;
if (crtc_state->base.degamma_lut)
if (crtc_state->hw.degamma_lut)
cgm_mode |= CGM_PIPE_MODE_DEGAMMA;
if (crtc_state->base.ctm)
if (crtc_state->hw.ctm)
cgm_mode |= CGM_PIPE_MODE_CSC;
if (crtc_state->base.gamma_lut)
if (crtc_state->hw.gamma_lut)
cgm_mode |= CGM_PIPE_MODE_GAMMA;
return cgm_mode;
@ -1306,7 +1306,7 @@ static int ilk_color_check(struct intel_crtc_state *crtc_state)
return ret;
crtc_state->gamma_enable =
crtc_state->base.gamma_lut &&
crtc_state->hw.gamma_lut &&
!crtc_state->c8_planes;
/*
@ -1334,8 +1334,8 @@ static u32 ivb_gamma_mode(const struct intel_crtc_state *crtc_state)
if (!crtc_state->gamma_enable ||
crtc_state_is_legacy_gamma(crtc_state))
return GAMMA_MODE_MODE_8BIT;
else if (crtc_state->base.gamma_lut &&
crtc_state->base.degamma_lut)
else if (crtc_state->hw.gamma_lut &&
crtc_state->hw.degamma_lut)
return GAMMA_MODE_MODE_SPLIT;
else
return GAMMA_MODE_MODE_10BIT;
@ -1349,7 +1349,7 @@ static u32 ivb_csc_mode(const struct intel_crtc_state *crtc_state)
* CSC comes after the LUT in degamma, RGB->YCbCr,
* and RGB full->limited range mode.
*/
if (crtc_state->base.degamma_lut ||
if (crtc_state->hw.degamma_lut ||
crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB ||
limited_color_range)
return 0;
@ -1367,13 +1367,13 @@ static int ivb_color_check(struct intel_crtc_state *crtc_state)
return ret;
crtc_state->gamma_enable =
(crtc_state->base.gamma_lut ||
crtc_state->base.degamma_lut) &&
(crtc_state->hw.gamma_lut ||
crtc_state->hw.degamma_lut) &&
!crtc_state->c8_planes;
crtc_state->csc_enable =
crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB ||
crtc_state->base.ctm || limited_color_range;
crtc_state->hw.ctm || limited_color_range;
crtc_state->gamma_mode = ivb_gamma_mode(crtc_state);
@ -1406,14 +1406,14 @@ static int glk_color_check(struct intel_crtc_state *crtc_state)
return ret;
crtc_state->gamma_enable =
crtc_state->base.gamma_lut &&
crtc_state->hw.gamma_lut &&
!crtc_state->c8_planes;
/* On GLK+ degamma LUT is controlled by csc_enable */
crtc_state->csc_enable =
crtc_state->base.degamma_lut ||
crtc_state->hw.degamma_lut ||
crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB ||
crtc_state->base.ctm || crtc_state->limited_color_range;
crtc_state->hw.ctm || crtc_state->limited_color_range;
crtc_state->gamma_mode = glk_gamma_mode(crtc_state);
@ -1432,14 +1432,14 @@ static u32 icl_gamma_mode(const struct intel_crtc_state *crtc_state)
{
u32 gamma_mode = 0;
if (crtc_state->base.degamma_lut)
if (crtc_state->hw.degamma_lut)
gamma_mode |= PRE_CSC_GAMMA_ENABLE;
if (crtc_state->base.gamma_lut &&
if (crtc_state->hw.gamma_lut &&
!crtc_state->c8_planes)
gamma_mode |= POST_CSC_GAMMA_ENABLE;
if (!crtc_state->base.gamma_lut ||
if (!crtc_state->hw.gamma_lut ||
crtc_state_is_legacy_gamma(crtc_state))
gamma_mode |= GAMMA_MODE_MODE_8BIT;
else
@ -1452,7 +1452,7 @@ static u32 icl_csc_mode(const struct intel_crtc_state *crtc_state)
{
u32 csc_mode = 0;
if (crtc_state->base.ctm)
if (crtc_state->hw.ctm)
csc_mode |= ICL_CSC_ENABLE;
if (crtc_state->output_format != INTEL_OUTPUT_FORMAT_RGB ||
@ -1540,7 +1540,7 @@ static int glk_gamma_precision(const struct intel_crtc_state *crtc_state)
int intel_color_get_gamma_bit_precision(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
if (HAS_GMCH(dev_priv)) {
@ -1646,7 +1646,7 @@ static u32 intel_color_lut_pack(u32 val, u32 bit_precision)
static struct drm_property_blob *
i9xx_read_lut_8(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
struct drm_property_blob *blob;
@ -1683,13 +1683,13 @@ static void i9xx_read_luts(struct intel_crtc_state *crtc_state)
if (!crtc_state->gamma_enable)
return;
crtc_state->base.gamma_lut = i9xx_read_lut_8(crtc_state);
crtc_state->hw.gamma_lut = i9xx_read_lut_8(crtc_state);
}
static struct drm_property_blob *
i965_read_lut_10p6(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 lut_size = INTEL_INFO(dev_priv)->color.gamma_lut_size;
enum pipe pipe = crtc->pipe;
@ -1733,15 +1733,15 @@ static void i965_read_luts(struct intel_crtc_state *crtc_state)
return;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT)
crtc_state->base.gamma_lut = i9xx_read_lut_8(crtc_state);
crtc_state->hw.gamma_lut = i9xx_read_lut_8(crtc_state);
else
crtc_state->base.gamma_lut = i965_read_lut_10p6(crtc_state);
crtc_state->hw.gamma_lut = i965_read_lut_10p6(crtc_state);
}
static struct drm_property_blob *
chv_read_cgm_lut(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 lut_size = INTEL_INFO(dev_priv)->color.gamma_lut_size;
enum pipe pipe = crtc->pipe;
@ -1775,7 +1775,7 @@ chv_read_cgm_lut(const struct intel_crtc_state *crtc_state)
static void chv_read_luts(struct intel_crtc_state *crtc_state)
{
if (crtc_state->cgm_mode & CGM_PIPE_MODE_GAMMA)
crtc_state->base.gamma_lut = chv_read_cgm_lut(crtc_state);
crtc_state->hw.gamma_lut = chv_read_cgm_lut(crtc_state);
else
i965_read_luts(crtc_state);
}
@ -1783,7 +1783,7 @@ static void chv_read_luts(struct intel_crtc_state *crtc_state)
static struct drm_property_blob *
ilk_read_lut_10(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 lut_size = INTEL_INFO(dev_priv)->color.gamma_lut_size;
enum pipe pipe = crtc->pipe;
@ -1822,15 +1822,15 @@ static void ilk_read_luts(struct intel_crtc_state *crtc_state)
return;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT)
crtc_state->base.gamma_lut = i9xx_read_lut_8(crtc_state);
crtc_state->hw.gamma_lut = i9xx_read_lut_8(crtc_state);
else
crtc_state->base.gamma_lut = ilk_read_lut_10(crtc_state);
crtc_state->hw.gamma_lut = ilk_read_lut_10(crtc_state);
}
static struct drm_property_blob *
glk_read_lut_10(const struct intel_crtc_state *crtc_state, u32 prec_index)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
int hw_lut_size = ivb_lut_10_size(prec_index);
enum pipe pipe = crtc->pipe;
@ -1871,9 +1871,9 @@ static void glk_read_luts(struct intel_crtc_state *crtc_state)
return;
if (crtc_state->gamma_mode == GAMMA_MODE_MODE_8BIT)
crtc_state->base.gamma_lut = i9xx_read_lut_8(crtc_state);
crtc_state->hw.gamma_lut = i9xx_read_lut_8(crtc_state);
else
crtc_state->base.gamma_lut = glk_read_lut_10(crtc_state, PAL_PREC_INDEX_VALUE(0));
crtc_state->hw.gamma_lut = glk_read_lut_10(crtc_state, PAL_PREC_INDEX_VALUE(0));
}
void intel_color_init(struct intel_crtc *crtc)


@ -132,9 +132,9 @@ static void intel_crt_get_config(struct intel_encoder *encoder,
{
pipe_config->output_types |= BIT(INTEL_OUTPUT_ANALOG);
pipe_config->base.adjusted_mode.flags |= intel_crt_get_flags(encoder);
pipe_config->hw.adjusted_mode.flags |= intel_crt_get_flags(encoder);
pipe_config->base.adjusted_mode.crtc_clock = pipe_config->port_clock;
pipe_config->hw.adjusted_mode.crtc_clock = pipe_config->port_clock;
}
static void hsw_crt_get_config(struct intel_encoder *encoder,
@ -144,13 +144,13 @@ static void hsw_crt_get_config(struct intel_encoder *encoder,
intel_ddi_get_config(encoder, pipe_config);
pipe_config->base.adjusted_mode.flags &= ~(DRM_MODE_FLAG_PHSYNC |
pipe_config->hw.adjusted_mode.flags &= ~(DRM_MODE_FLAG_PHSYNC |
DRM_MODE_FLAG_NHSYNC |
DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_NVSYNC);
pipe_config->base.adjusted_mode.flags |= intel_crt_get_flags(encoder);
pipe_config->hw.adjusted_mode.flags |= intel_crt_get_flags(encoder);
pipe_config->base.adjusted_mode.crtc_clock = lpt_get_iclkip(dev_priv);
pipe_config->hw.adjusted_mode.crtc_clock = lpt_get_iclkip(dev_priv);
}
/* Note: The caller is required to filter out dpms modes not supported by the
@ -161,8 +161,8 @@ static void intel_crt_set_dpms(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crt *crt = intel_encoder_to_crt(encoder);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_display_mode *adjusted_mode = &crtc_state->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;
u32 adpa;
if (INTEL_GEN(dev_priv) >= 5)
@ -241,6 +241,14 @@ static void hsw_post_disable_crt(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
intel_crtc_vblank_off(old_crtc_state);
intel_disable_pipe(old_crtc_state);
intel_ddi_disable_transcoder_func(old_crtc_state);
ironlake_pfit_disable(old_crtc_state);
intel_ddi_disable_pipe_clock(old_crtc_state);
pch_post_disable_crt(encoder, old_crtc_state, old_conn_state);
@ -271,14 +279,14 @@ static void hsw_pre_enable_crt(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
WARN_ON(!crtc_state->has_pch_encoder);
intel_set_cpu_fifo_underrun_reporting(dev_priv, pipe, false);
dev_priv->display.fdi_link_train(crtc, crtc_state);
hsw_fdi_link_train(encoder, crtc_state);
intel_ddi_enable_pipe_clock(crtc_state);
}
@ -288,7 +296,7 @@ static void hsw_enable_crt(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
WARN_ON(!crtc_state->has_pch_encoder);
@ -358,7 +366,7 @@ static int intel_crt_compute_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return -EINVAL;
@ -373,7 +381,7 @@ static int pch_crt_compute_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return -EINVAL;
@ -390,7 +398,7 @@ static int hsw_crt_compute_config(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLSCAN)
return -EINVAL;


@ -902,11 +902,10 @@ icl_get_combo_buf_trans(struct drm_i915_private *dev_priv, int type, int rate,
static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port port)
{
struct ddi_vbt_port_info *port_info = &dev_priv->vbt.ddi_port_info[port];
int n_entries, level, default_entry;
enum phy phy = intel_port_to_phy(dev_priv, port);
level = dev_priv->vbt.ddi_port_info[port].hdmi_level_shift;
if (INTEL_GEN(dev_priv) >= 12) {
if (intel_phy_is_combo(dev_priv, phy))
icl_get_combo_buf_trans(dev_priv, INTEL_OUTPUT_HDMI,
@ -941,12 +940,14 @@ static int intel_ddi_hdmi_level(struct drm_i915_private *dev_priv, enum port por
return 0;
}
/* Choose a good default if VBT is badly populated */
if (level == HDMI_LEVEL_SHIFT_UNKNOWN || level >= n_entries)
level = default_entry;
if (WARN_ON_ONCE(n_entries == 0))
return 0;
if (port_info->hdmi_level_shift_set)
level = port_info->hdmi_level_shift;
else
level = default_entry;
if (WARN_ON_ONCE(level >= n_entries))
level = n_entries - 1;
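
The magic HDMI_LEVEL_SHIFT_UNKNOWN sentinel (whose seeding in init_vbt_defaults() is deleted further up) gives way to an explicit hdmi_level_shift_set flag, so the level value itself no longer doubles as a validity marker. The new selection, de-interleaved from the hunk above:

if (port_info->hdmi_level_shift_set)
	level = port_info->hdmi_level_shift;	/* VBT supplied a value */
else
	level = default_entry;			/* per-platform default */

if (WARN_ON_ONCE(level >= n_entries))		/* clamp bogus VBT data */
	level = n_entries - 1;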
@ -1106,18 +1107,14 @@ static u32 icl_pll_to_ddi_clk_sel(struct intel_encoder *encoder,
* DDI A (which is used for eDP)
*/
void hsw_fdi_link_train(struct intel_crtc *crtc,
void hsw_fdi_link_train(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_encoder *encoder;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
u32 temp, i, rx_ctl_val, ddi_pll_sel;
for_each_encoder_on_crtc(dev, &crtc->base, encoder) {
WARN_ON(encoder->type != INTEL_OUTPUT_ANALOG);
intel_prepare_dp_ddi_buffers(encoder, crtc_state);
}
intel_prepare_dp_ddi_buffers(encoder, crtc_state);
/* Set the FDI_RX_MISC pwrdn lanes and the 2 workarounds listed at the
* mode set "sequence for CRT port" document:
@ -1542,7 +1539,7 @@ static void ddi_dotclock_get(struct intel_crtc_state *pipe_config)
if (pipe_config->pixel_multiplier)
dotclock /= pipe_config->pixel_multiplier;
pipe_config->base.adjusted_mode.crtc_clock = dotclock;
pipe_config->hw.adjusted_mode.crtc_clock = dotclock;
}
static void icl_ddi_clock_get(struct intel_encoder *encoder,
@ -1758,7 +1755,7 @@ static void intel_ddi_clock_get(struct intel_encoder *encoder,
void intel_ddi_set_dp_msa(const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 temp;
@ -1815,22 +1812,6 @@ void intel_ddi_set_dp_msa(const struct intel_crtc_state *crtc_state,
I915_WRITE(TRANS_MSA_MISC(cpu_transcoder), temp);
}
void intel_ddi_set_vc_payload_alloc(const struct intel_crtc_state *crtc_state,
bool state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 temp;
temp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
if (state == true)
temp |= TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
else
temp &= ~TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
}
/*
* Returns the TRANS_DDI_FUNC_CTL value based on CRTC state.
*
@ -1840,7 +1821,7 @@ void intel_ddi_set_vc_payload_alloc(const struct intel_crtc_state *crtc_state,
static u32
intel_ddi_transcoder_func_reg_val_get(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_encoder *encoder = intel_ddi_get_crtc_encoder(crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
@ -1872,9 +1853,9 @@ intel_ddi_transcoder_func_reg_val_get(const struct intel_crtc_state *crtc_state)
BUG();
}
if (crtc_state->base.adjusted_mode.flags & DRM_MODE_FLAG_PVSYNC)
if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_PVSYNC)
temp |= TRANS_DDI_PVSYNC;
if (crtc_state->base.adjusted_mode.flags & DRM_MODE_FLAG_PHSYNC)
if (crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_PHSYNC)
temp |= TRANS_DDI_PHSYNC;
if (cpu_transcoder == TRANSCODER_EDP) {
@ -1930,12 +1911,14 @@ intel_ddi_transcoder_func_reg_val_get(const struct intel_crtc_state *crtc_state)
void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 temp;
temp = intel_ddi_transcoder_func_reg_val_get(crtc_state);
if (intel_crtc_has_type(crtc_state, INTEL_OUTPUT_DP_MST))
temp |= TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
I915_WRITE(TRANS_DDI_FUNC_CTL(cpu_transcoder), temp);
}
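
With the standalone intel_ddi_set_vc_payload_alloc() helper removed above, MST no longer patches TRANS_DDI_DP_VC_PAYLOAD_ALLOC in with a later read-modify-write; the bit is folded into the value while it is composed, so the transcoder function register sees a single write. The pattern, reduced to a sketch:

/* Sketch: fold conditional bits in during composition, write once. */
static inline u32 compose_trans_ddi_func_ctl(u32 base, bool is_mst)
{
	if (is_mst)
		base |= TRANS_DDI_DP_VC_PAYLOAD_ALLOC;
	return base;
}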
@ -1946,7 +1929,7 @@ void intel_ddi_enable_transcoder_func(const struct intel_crtc_state *crtc_state)
static void
intel_ddi_config_transcoder_func(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 temp;
@ -1958,7 +1941,7 @@ intel_ddi_config_transcoder_func(const struct intel_crtc_state *crtc_state)
void intel_ddi_disable_transcoder_func(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
i915_reg_t reg = TRANS_DDI_FUNC_CTL(cpu_transcoder);
@ -2256,7 +2239,7 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,
void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_encoder *encoder = intel_ddi_get_crtc_encoder(crtc);
enum port port = encoder->port;
@ -2274,7 +2257,7 @@ void intel_ddi_enable_pipe_clock(const struct intel_crtc_state *crtc_state)
void intel_ddi_disable_pipe_clock(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
if (cpu_transcoder != TRANSCODER_EDP) {
@ -3016,11 +2999,38 @@ static void icl_unmap_plls_to_ports(struct intel_encoder *encoder)
mutex_unlock(&dev_priv->dpll_lock);
}
static void icl_sanitize_port_clk_off(struct drm_i915_private *dev_priv,
u32 port_mask, bool ddi_clk_needed)
{
enum port port;
u32 val;
val = I915_READ(ICL_DPCLKA_CFGCR0);
for_each_port_masked(port, port_mask) {
enum phy phy = intel_port_to_phy(dev_priv, port);
bool ddi_clk_off = val & icl_dpclka_cfgcr0_clk_off(dev_priv,
phy);
if (ddi_clk_needed == !ddi_clk_off)
continue;
/*
* Punt on the case now where clock is gated, but it would
* be needed by the port. Something else is really broken then.
*/
if (WARN_ON(ddi_clk_needed))
continue;
DRM_NOTE("PHY %c is disabled/in DSI mode with an ungated DDI clock, gate it\n",
phy_name(phy));
val |= icl_dpclka_cfgcr0_clk_off(dev_priv, phy);
I915_WRITE(ICL_DPCLKA_CFGCR0, val);
}
}
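
The loop factored out into icl_sanitize_port_clk_off() leaves a consistent clock alone, punts (with a WARN in the driver) when a needed clock is already gated, and gates any clock that is ungated but unneeded. A userspace model of the same decision table, with one clock-off bit per PHY; the real code maps ports to PHYs and bit positions via helpers:

#include <stdint.h>

static uint32_t sanitize_clk_off(uint32_t reg, uint32_t phy_mask,
				 int ddi_clk_needed)
{
	int phy;

	for (phy = 0; phy < 32; phy++) {
		uint32_t off_bit = (uint32_t)1 << phy;

		if (!(phy_mask & off_bit))
			continue;
		if (ddi_clk_needed == !(reg & off_bit))
			continue;	/* already consistent */
		if (ddi_clk_needed)
			continue;	/* gated but needed: punt */
		reg |= off_bit;		/* ungated and unneeded: gate it */
	}

	return reg;
}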
void icl_sanitize_encoder_pll_mapping(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
u32 val;
enum port port;
u32 port_mask;
bool ddi_clk_needed;
@ -3069,29 +3079,7 @@ void icl_sanitize_encoder_pll_mapping(struct intel_encoder *encoder)
ddi_clk_needed = false;
}
val = I915_READ(ICL_DPCLKA_CFGCR0);
for_each_port_masked(port, port_mask) {
enum phy phy = intel_port_to_phy(dev_priv, port);
bool ddi_clk_ungated = !(val &
icl_dpclka_cfgcr0_clk_off(dev_priv,
phy));
if (ddi_clk_needed == ddi_clk_ungated)
continue;
/*
* Punt on the case now where clock is gated, but it would
* be needed by the port. Something else is really broken then.
*/
if (WARN_ON(ddi_clk_needed))
continue;
DRM_NOTE("PHY %c is disabled/in DSI mode with an ungated DDI clock, gate it\n",
phy_name(port));
val |= icl_dpclka_cfgcr0_clk_off(dev_priv, phy);
I915_WRITE(ICL_DPCLKA_CFGCR0, val);
}
icl_sanitize_port_clk_off(dev_priv, port_mask, ddi_clk_needed);
}
static void intel_ddi_clk_select(struct intel_encoder *encoder,
@ -3359,7 +3347,7 @@ static void intel_ddi_disable_fec_state(struct intel_encoder *encoder,
static void
tgl_clear_psr2_transcoder_exitline(const struct intel_crtc_state *cstate)
{
struct drm_i915_private *dev_priv = to_i915(cstate->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(cstate->uapi.crtc->dev);
u32 val;
if (!cstate->dc3co_exitline)
@ -3374,7 +3362,7 @@ static void
tgl_set_psr2_transcoder_exitline(const struct intel_crtc_state *cstate)
{
u32 val, exit_scanlines;
struct drm_i915_private *dev_priv = to_i915(cstate->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(cstate->uapi.crtc->dev);
if (!cstate->dc3co_exitline)
return;
@ -3392,8 +3380,8 @@ static void tgl_dc3co_exitline_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *cstate)
{
u32 exit_scanlines;
struct drm_i915_private *dev_priv = to_i915(cstate->base.crtc->dev);
u32 crtc_vdisplay = cstate->base.adjusted_mode.crtc_vdisplay;
struct drm_i915_private *dev_priv = to_i915(cstate->uapi.crtc->dev);
u32 crtc_vdisplay = cstate->hw.adjusted_mode.crtc_vdisplay;
cstate->dc3co_exitline = 0;
@ -3401,11 +3389,11 @@ static void tgl_dc3co_exitline_compute_config(struct intel_encoder *encoder,
return;
/* B.Specs:49196 DC3CO only works with pipeA and DDIA.*/
if (to_intel_crtc(cstate->base.crtc)->pipe != PIPE_A ||
if (to_intel_crtc(cstate->uapi.crtc)->pipe != PIPE_A ||
encoder->port != PORT_A)
return;
if (!cstate->has_psr2 || !cstate->base.active)
if (!cstate->has_psr2 || !cstate->hw.active)
return;
/*
@ -3413,7 +3401,7 @@ static void tgl_dc3co_exitline_compute_config(struct intel_encoder *encoder,
* PSR2 transcoder Early Exit scanlines = ROUNDUP(200 / line time) + 1
*/
exit_scanlines =
intel_usecs_to_scanlines(&cstate->base.adjusted_mode, 200) + 1;
intel_usecs_to_scanlines(&cstate->hw.adjusted_mode, 200) + 1;
if (WARN_ON(exit_scanlines > crtc_vdisplay))
return;
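/*
 * Worked example of the formula above, assuming a 1920x1080@60 mode
 * with a 148.5 MHz dot clock and an htotal of 2200: one line takes
 * 2200 / 148500 kHz ~= 14.8 us, so 200 us rounds up to 14 scanlines
 * and exit_scanlines becomes 14 + 1 = 15, well under the 1080-line
 * vdisplay.
 */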
@ -3425,7 +3413,7 @@ static void tgl_dc3co_exitline_compute_config(struct intel_encoder *encoder,
static void tgl_dc3co_exitline_get_config(struct intel_crtc_state *crtc_state)
{
u32 val;
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
if (INTEL_GEN(dev_priv) < 12)
return;
@ -3455,47 +3443,86 @@ static void tgl_ddi_pre_enable_dp(struct intel_encoder *encoder,
intel_dp->regs.dp_tp_ctl = TGL_DP_TP_CTL(transcoder);
intel_dp->regs.dp_tp_status = TGL_DP_TP_STATUS(transcoder);
/* 1.a got on intel_atomic_commit_tail() */
/*
* 1. Enable Power Wells
*
* This was handled at the beginning of intel_atomic_commit_tail(),
* before we called down into this function.
*/
/* 2. */
/* 2. Enable Panel Power if PPS is required */
intel_edp_panel_on(intel_dp);
/*
* 1.b, 3. and 4.a is done before tgl_ddi_pre_enable_dp() by:
* haswell_crtc_enable()->intel_encoders_pre_pll_enable() and
* haswell_crtc_enable()->intel_enable_shared_dpll()
* 3. For non-TBT Type-C ports, set FIA lane count
* (DFLEXDPSP.DPX4TXLATC)
*
* This was done before tgl_ddi_pre_enable_dp by
* haswell_crtc_enable()->intel_encoders_pre_pll_enable().
*/
/* 4.b */
/*
* 4. Enable the port PLL.
*
* The PLL enabling itself was already done before this function by
* haswell_crtc_enable()->intel_enable_shared_dpll(). We need only
* configure the PLL to port mapping here.
*/
intel_ddi_clk_select(encoder, crtc_state);
/* 5. */
/* 5. If IO power is controlled through PWR_WELL_CTL, Enable IO Power */
if (!intel_phy_is_tc(dev_priv, phy) ||
dig_port->tc_mode != TC_PORT_TBT_ALT)
intel_display_power_get(dev_priv,
dig_port->ddi_io_power_domain);
/* 6. */
/* 6. Program DP_MODE */
icl_program_mg_dp_mode(dig_port, crtc_state);
/*
* 7.a - Steps in this function should only be executed over MST
* master, what will be taken in care by MST hook
* intel_mst_pre_enable_dp()
* 7. The rest of the below are substeps under the bspec's "Enable and
* Train Display Port" step. Note that steps that are specific to
* MST will be handled by intel_mst_pre_enable_dp() before/after it
* calls into this function. Also intel_mst_pre_enable_dp() only calls
* us when active_mst_links==0, so any steps designated for "single
* stream or multi-stream master transcoder" can just be performed
* unconditionally here.
*/
/*
* 7.a Configure Transcoder Clock Select to direct the Port clock to the
* Transcoder.
*/
intel_ddi_enable_pipe_clock(crtc_state);
/* 7.b */
/*
* 7.b Configure TRANS_DDI_FUNC_CTL DDI Select, DDI Mode Select & MST
* Transport Select
*/
intel_ddi_config_transcoder_func(crtc_state);
/* 7.d */
/*
* 7.c Configure & enable DP_TP_CTL with link training pattern 1
* selected
*
* This will be handled by the intel_dp_start_link_train() farther
* down this function.
*/
/*
* 7.d Type C with DP alternate or fixed/legacy/static connection -
* Disable PHY clock gating per Type-C DDI Buffer page
*/
icl_phy_set_clock_gating(dig_port, false);
/* 7.e */
/* 7.e Configure voltage swing and related IO settings */
tgl_ddi_vswing_sequence(encoder, crtc_state->port_clock, level,
encoder->type);
/* 7.f */
/*
* 7.f Combo PHY: Configure PORT_CL_DW10 Static Power Down to power up
* the used lanes of the DDI.
*/
if (intel_phy_is_combo(dev_priv, phy)) {
bool lane_reversal =
dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
@ -3505,7 +3532,14 @@ static void tgl_ddi_pre_enable_dp(struct intel_encoder *encoder,
lane_reversal);
}
/* 7.g */
/*
* 7.g Configure and enable DDI_BUF_CTL
* 7.h Wait for DDI_BUF_CTL DDI Idle Status = 0b (Not Idle), timeout
* after 500 us.
*
* We only configure what the register value will be here. Actual
* enabling happens during link training farther down.
*/
intel_ddi_init_dp_buf_reg(encoder);
if (!is_mst)
@ -3518,10 +3552,17 @@ static void tgl_ddi_pre_enable_dp(struct intel_encoder *encoder,
* training
*/
intel_dp_sink_set_fec_ready(intel_dp, crtc_state);
/* 7.c, 7.h, 7.i, 7.j */
/*
* 7.i Follow DisplayPort specification training sequence (see notes for
* failure handling)
* 7.j If DisplayPort multi-stream - Set DP_TP_CTL link training to Idle
* Pattern, wait for 5 idle patterns (DP_TP_STATUS Min_Idles_Sent)
* (timeout after 800 us)
*/
intel_dp_start_link_train(intel_dp);
/* 7.k */
/* 7.k Set DP_TP_CTL link training to Normal */
if (!is_trans_port_sync_mode(crtc_state))
intel_dp_stop_link_train(intel_dp);
@ -3534,7 +3575,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_encoder *encoder,
* so not enabling it for now.
*/
/* 7.l */
/* 7.l Configure and enable FEC if needed */
intel_ddi_enable_fec(encoder, crtc_state);
intel_dsc_enable(encoder, crtc_state);
}
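/*
 * Recap of the call ordering the step comments above rely on:
 * haswell_crtc_enable() first runs the encoder pre_pll_enable hooks
 * (steps 1.b and 3), then intel_enable_shared_dpll() (step 4.a), and
 * only then the pre_enable hook that lands in this function, which
 * picks the sequence up at the PLL-to-port mapping (step 4.b).
 */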
@ -3677,7 +3718,7 @@ static void intel_ddi_pre_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
@ -3761,17 +3802,25 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
INTEL_OUTPUT_DP_MST);
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
if (!is_mst) {
/*
* Power down sink before disabling the port, otherwise we end
* up getting interrupts from the sink on detecting link loss.
*/
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
if (INTEL_GEN(dev_priv) < 12 && !is_mst)
intel_ddi_disable_pipe_clock(old_crtc_state);
/*
* Power down sink before disabling the port, otherwise we end
* up getting interrupts from the sink on detecting link loss.
*/
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
}
intel_disable_ddi_buf(encoder, old_crtc_state);
/*
* From TGL spec: "If single stream or multi-stream master transcoder:
* Configure Transcoder Clock select to direct no clock to the
* transcoder"
*/
if (INTEL_GEN(dev_priv) >= 12)
intel_ddi_disable_pipe_clock(old_crtc_state);
intel_edp_panel_vdd_on(intel_dp);
intel_edp_panel_off(intel_dp);
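/*
 * At the DPCD level the sink power-down above amounts to a single aux
 * write; a minimal sketch (DP 1.1+ sinks, error handling omitted):
 *
 *	drm_dp_dpcd_writeb(&intel_dp->aux, DP_SET_POWER, DP_SET_POWER_D3);
 */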
@ -3807,11 +3856,49 @@ static void intel_ddi_post_disable_hdmi(struct intel_encoder *encoder,
intel_dp_dual_mode_set_tmds_output(intel_hdmi, false);
}
static void icl_disable_transcoder_port_sync(const struct intel_crtc_state *old_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
i915_reg_t reg;
u32 trans_ddi_func_ctl2_val;
if (old_crtc_state->master_transcoder == INVALID_TRANSCODER)
return;
DRM_DEBUG_KMS("Disabling Transcoder Port Sync on Slave Transcoder %s\n",
transcoder_name(old_crtc_state->cpu_transcoder));
reg = TRANS_DDI_FUNC_CTL2(old_crtc_state->cpu_transcoder);
trans_ddi_func_ctl2_val = ~(PORT_SYNC_MODE_ENABLE |
PORT_SYNC_MODE_MASTER_SELECT_MASK);
I915_WRITE(reg, trans_ddi_func_ctl2_val);
}
static void intel_ddi_post_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
bool is_tc_port = intel_phy_is_tc(dev_priv, phy);
intel_crtc_vblank_off(old_crtc_state);
intel_disable_pipe(old_crtc_state);
if (INTEL_GEN(dev_priv) >= 11)
icl_disable_transcoder_port_sync(old_crtc_state);
intel_ddi_disable_transcoder_func(old_crtc_state);
intel_dsc_disable(old_crtc_state);
if (INTEL_GEN(dev_priv) >= 9)
skylake_scaler_disable(old_crtc_state);
else
ironlake_pfit_disable(old_crtc_state);
/*
* When called from DP MST code:
@ -3835,6 +3922,13 @@ static void intel_ddi_post_disable(struct intel_encoder *encoder,
if (INTEL_GEN(dev_priv) >= 11)
icl_unmap_plls_to_ports(encoder);
if (intel_crtc_has_dp_encoder(old_crtc_state) || is_tc_port)
intel_display_power_put_unchecked(dev_priv,
intel_ddi_main_link_aux_domain(dig_port));
if (is_tc_port)
intel_tc_port_put_link(dig_port);
}
void intel_ddi_fdi_post_disable(struct intel_encoder *encoder,
@ -4107,7 +4201,7 @@ intel_ddi_update_prepare(struct intel_atomic_state *state,
WARN_ON(crtc && crtc->active);
intel_tc_port_get_link(enc_to_dig_port(&encoder->base), required_lanes);
if (crtc_state && crtc_state->base.active)
if (crtc_state && crtc_state->hw.active)
intel_update_active_dpll(state, crtc, encoder);
}
@ -4147,61 +4241,44 @@ intel_ddi_pre_pll_enable(struct intel_encoder *encoder,
crtc_state->lane_lat_optim_mask);
}
static void
intel_ddi_post_pll_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
enum phy phy = intel_port_to_phy(dev_priv, encoder->port);
bool is_tc_port = intel_phy_is_tc(dev_priv, phy);
if (intel_crtc_has_dp_encoder(crtc_state) || is_tc_port)
intel_display_power_put_unchecked(dev_priv,
intel_ddi_main_link_aux_domain(dig_port));
if (is_tc_port)
intel_tc_port_put_link(dig_port);
}
static void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp)
{
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv =
to_i915(intel_dig_port->base.base.dev);
enum port port = intel_dig_port->base.port;
u32 val;
u32 dp_tp_ctl, ddi_buf_ctl;
bool wait = false;
if (I915_READ(intel_dp->regs.dp_tp_ctl) & DP_TP_CTL_ENABLE) {
val = I915_READ(DDI_BUF_CTL(port));
if (val & DDI_BUF_CTL_ENABLE) {
val &= ~DDI_BUF_CTL_ENABLE;
I915_WRITE(DDI_BUF_CTL(port), val);
dp_tp_ctl = I915_READ(intel_dp->regs.dp_tp_ctl);
if (dp_tp_ctl & DP_TP_CTL_ENABLE) {
ddi_buf_ctl = I915_READ(DDI_BUF_CTL(port));
if (ddi_buf_ctl & DDI_BUF_CTL_ENABLE) {
I915_WRITE(DDI_BUF_CTL(port),
ddi_buf_ctl & ~DDI_BUF_CTL_ENABLE);
wait = true;
}
val = I915_READ(intel_dp->regs.dp_tp_ctl);
val &= ~(DP_TP_CTL_ENABLE | DP_TP_CTL_LINK_TRAIN_MASK);
val |= DP_TP_CTL_LINK_TRAIN_PAT1;
I915_WRITE(intel_dp->regs.dp_tp_ctl, val);
dp_tp_ctl &= ~(DP_TP_CTL_ENABLE | DP_TP_CTL_LINK_TRAIN_MASK);
dp_tp_ctl |= DP_TP_CTL_LINK_TRAIN_PAT1;
I915_WRITE(intel_dp->regs.dp_tp_ctl, dp_tp_ctl);
POSTING_READ(intel_dp->regs.dp_tp_ctl);
if (wait)
intel_wait_ddi_buf_idle(dev_priv, port);
}
val = DP_TP_CTL_ENABLE |
DP_TP_CTL_LINK_TRAIN_PAT1 | DP_TP_CTL_SCRAMBLE_DISABLE;
dp_tp_ctl = DP_TP_CTL_ENABLE |
DP_TP_CTL_LINK_TRAIN_PAT1 | DP_TP_CTL_SCRAMBLE_DISABLE;
if (intel_dp->link_mst)
val |= DP_TP_CTL_MODE_MST;
dp_tp_ctl |= DP_TP_CTL_MODE_MST;
else {
val |= DP_TP_CTL_MODE_SST;
dp_tp_ctl |= DP_TP_CTL_MODE_SST;
if (drm_dp_enhanced_frame_cap(intel_dp->dpcd))
val |= DP_TP_CTL_ENHANCED_FRAME_ENABLE;
dp_tp_ctl |= DP_TP_CTL_ENHANCED_FRAME_ENABLE;
}
I915_WRITE(intel_dp->regs.dp_tp_ctl, val);
I915_WRITE(intel_dp->regs.dp_tp_ctl, dp_tp_ctl);
POSTING_READ(intel_dp->regs.dp_tp_ctl);
intel_dp->DP |= DDI_BUF_CTL_ENABLE;
@ -4237,7 +4314,7 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
enum transcoder cpu_transcoder = pipe_config->cpu_transcoder;
u32 temp, flags = 0;
@ -4245,6 +4322,8 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
if (WARN_ON(transcoder_is_dsi(cpu_transcoder)))
return;
intel_dsc_get_config(encoder, pipe_config);
temp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
if (temp & TRANS_DDI_PHSYNC)
flags |= DRM_MODE_FLAG_PHSYNC;
@ -4255,7 +4334,7 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
switch (temp & TRANS_DDI_BPC_MASK) {
case TRANS_DDI_BPC_6:
@ -4404,7 +4483,7 @@ static int intel_ddi_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
{
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum port port = encoder->port;
int ret;
@ -4538,7 +4617,7 @@ static int intel_hdmi_reset_link(struct intel_encoder *encoder,
WARN_ON(!intel_crtc_has_type(crtc_state, INTEL_OUTPUT_HDMI));
if (!crtc_state->base.active)
if (!crtc_state->hw.active)
return 0;
if (!crtc_state->hdmi_high_tmds_clock_ratio &&
@ -4709,8 +4788,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
struct ddi_vbt_port_info *port_info =
&dev_priv->vbt.ddi_port_info[port];
struct intel_digital_port *intel_dig_port;
struct intel_encoder *intel_encoder;
struct drm_encoder *encoder;
struct intel_encoder *encoder;
bool init_hdmi, init_dp, init_lspcon = false;
enum phy phy = intel_port_to_phy(dev_priv, port);
@ -4739,31 +4817,30 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
if (!intel_dig_port)
return;
intel_encoder = &intel_dig_port->base;
encoder = &intel_encoder->base;
encoder = &intel_dig_port->base;
drm_encoder_init(&dev_priv->drm, encoder, &intel_ddi_funcs,
drm_encoder_init(&dev_priv->drm, &encoder->base, &intel_ddi_funcs,
DRM_MODE_ENCODER_TMDS, "DDI %c", port_name(port));
intel_encoder->hotplug = intel_ddi_hotplug;
intel_encoder->compute_output_type = intel_ddi_compute_output_type;
intel_encoder->compute_config = intel_ddi_compute_config;
intel_encoder->enable = intel_enable_ddi;
intel_encoder->pre_pll_enable = intel_ddi_pre_pll_enable;
intel_encoder->post_pll_disable = intel_ddi_post_pll_disable;
intel_encoder->pre_enable = intel_ddi_pre_enable;
intel_encoder->disable = intel_disable_ddi;
intel_encoder->post_disable = intel_ddi_post_disable;
intel_encoder->update_pipe = intel_ddi_update_pipe;
intel_encoder->get_hw_state = intel_ddi_get_hw_state;
intel_encoder->get_config = intel_ddi_get_config;
intel_encoder->suspend = intel_dp_encoder_suspend;
intel_encoder->get_power_domains = intel_ddi_get_power_domains;
intel_encoder->type = INTEL_OUTPUT_DDI;
intel_encoder->power_domain = intel_port_to_power_domain(port);
intel_encoder->port = port;
intel_encoder->cloneable = 0;
intel_encoder->pipe_mask = ~0;
encoder->hotplug = intel_ddi_hotplug;
encoder->compute_output_type = intel_ddi_compute_output_type;
encoder->compute_config = intel_ddi_compute_config;
encoder->enable = intel_enable_ddi;
encoder->pre_pll_enable = intel_ddi_pre_pll_enable;
encoder->pre_enable = intel_ddi_pre_enable;
encoder->disable = intel_disable_ddi;
encoder->post_disable = intel_ddi_post_disable;
encoder->update_pipe = intel_ddi_update_pipe;
encoder->get_hw_state = intel_ddi_get_hw_state;
encoder->get_config = intel_ddi_get_config;
encoder->suspend = intel_dp_encoder_suspend;
encoder->get_power_domains = intel_ddi_get_power_domains;
encoder->type = INTEL_OUTPUT_DDI;
encoder->power_domain = intel_port_to_power_domain(port);
encoder->port = port;
encoder->cloneable = 0;
encoder->pipe_mask = ~0;
if (INTEL_GEN(dev_priv) >= 11)
intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &
@ -4771,6 +4848,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
else
intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &
(DDI_BUF_PORT_REVERSAL | DDI_A_4_LANES);
intel_dig_port->dp.output_reg = INVALID_MMIO_REG;
intel_dig_port->max_lanes = intel_ddi_max_lanes(intel_dig_port);
intel_dig_port->aux_ch = intel_bios_port_aux_ch(dev_priv, port);
@ -4781,8 +4859,8 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
intel_tc_port_init(intel_dig_port, is_legacy);
intel_encoder->update_prepare = intel_ddi_update_prepare;
intel_encoder->update_complete = intel_ddi_update_complete;
encoder->update_prepare = intel_ddi_update_prepare;
encoder->update_complete = intel_ddi_update_complete;
}
WARN_ON(port > PORT_I);
@ -4798,7 +4876,7 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
/* In theory we don't need the encoder->type check, but leave it just in
* case we have some really bad VBTs... */
if (intel_encoder->type != INTEL_OUTPUT_EDP && init_hdmi) {
if (encoder->type != INTEL_OUTPUT_EDP && init_hdmi) {
if (!intel_ddi_init_hdmi_connector(intel_dig_port))
goto err;
}
@ -4822,6 +4900,6 @@ void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port)
return;
err:
drm_encoder_cleanup(encoder);
drm_encoder_cleanup(&encoder->base);
kfree(intel_dig_port);
}


@ -22,7 +22,7 @@ struct intel_encoder;
void intel_ddi_fdi_post_disable(struct intel_encoder *intel_encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state);
void hsw_fdi_link_train(struct intel_crtc *crtc,
void hsw_fdi_link_train(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void intel_ddi_init(struct drm_i915_private *dev_priv, enum port port);
bool intel_ddi_get_hw_state(struct intel_encoder *encoder, enum pipe *pipe);

[diff for one file not shown: file too large]


@ -332,8 +332,11 @@ enum phy_fia {
(__s) < RUNTIME_INFO(__dev_priv)->num_sprites[(__p)]; \
(__s)++)
#define for_each_port_masked(__port, __ports_mask) \
for ((__port) = PORT_A; (__port) < I915_MAX_PORTS; (__port)++) \
#define for_each_port(__port) \
for ((__port) = PORT_A; (__port) < I915_MAX_PORTS; (__port)++)
#define for_each_port_masked(__port, __ports_mask) \
for_each_port(__port) \
for_each_if((__ports_mask) & BIT(__port))
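/*
 * Usage sketch for the masked iterator (mask value hypothetical):
 *
 *	enum port port;
 *
 *	for_each_port_masked(port, BIT(PORT_A) | BIT(PORT_C))
 *		DRM_DEBUG_KMS("port %c\n", port_name(port));
 */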
#define for_each_phy_masked(__phy, __phys_mask) \
@ -377,6 +380,13 @@ enum phy_fia {
&(dev)->mode_config.encoder_list, \
base.head)
#define for_each_intel_encoder_mask(dev, intel_encoder, encoder_mask) \
list_for_each_entry(intel_encoder, \
&(dev)->mode_config.encoder_list, \
base.head) \
for_each_if((encoder_mask) & \
drm_encoder_mask(&intel_encoder->base))
#define for_each_intel_dp(dev, intel_encoder) \
for_each_intel_encoder(dev, intel_encoder) \
for_each_if(intel_encoder_is_dp(intel_encoder))
@ -446,10 +456,18 @@ enum phy_fia {
#define intel_atomic_crtc_state_for_each_plane_state( \
plane, plane_state, \
crtc_state) \
for_each_intel_plane_mask(((crtc_state)->base.state->dev), (plane), \
((crtc_state)->base.plane_mask)) \
for_each_intel_plane_mask(((crtc_state)->uapi.state->dev), (plane), \
((crtc_state)->uapi.plane_mask)) \
for_each_if ((plane_state = \
to_intel_plane_state(__drm_atomic_get_current_plane_state((crtc_state)->base.state, &plane->base))))
to_intel_plane_state(__drm_atomic_get_current_plane_state((crtc_state)->uapi.state, &plane->base))))
#define for_each_new_intel_connector_in_state(__state, connector, new_connector_state, __i) \
for ((__i) = 0; \
(__i) < (__state)->base.num_connector; \
(__i)++) \
for_each_if ((__state)->base.connectors[__i].ptr && \
((connector) = to_intel_connector((__state)->base.connectors[__i].ptr), \
(new_connector_state) = to_intel_digital_connector_state((__state)->base.connectors[__i].new_state), 1))
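/*
 * Usage sketch for the connector iterator (variable names ours): walk
 * every connector in an atomic state.
 *
 *	struct intel_connector *connector;
 *	struct intel_digital_connector_state *conn_state;
 *	int i;
 *
 *	for_each_new_intel_connector_in_state(state, connector, conn_state, i)
 *		DRM_DEBUG_KMS("%s\n", connector->base.name);
 */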
void intel_link_compute_m_n(u16 bpp, int nlanes,
int pixel_clock, int link_clock,
@ -467,6 +485,7 @@ enum phy intel_port_to_phy(struct drm_i915_private *i915, enum port port);
bool is_trans_port_sync_mode(const struct intel_crtc_state *state);
void intel_plane_destroy(struct drm_plane *plane);
void intel_disable_pipe(const struct intel_crtc_state *old_crtc_state);
void i830_enable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
void i830_disable_pipe(struct drm_i915_private *dev_priv, enum pipe pipe);
enum pipe intel_crtc_pch_transcoder(struct intel_crtc *crtc);
@ -499,9 +518,8 @@ enum tc_port intel_port_to_tc(struct drm_i915_private *dev_priv,
enum port port);
int intel_get_pipe_from_crtc_id_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
enum transcoder intel_pipe_to_cpu_transcoder(struct drm_i915_private *dev_priv,
enum pipe pipe);
u32 intel_crtc_get_vblank_counter(struct intel_crtc *crtc);
void intel_crtc_vblank_off(const struct intel_crtc_state *crtc_state);
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp);
void vlv_wait_port_ready(struct drm_i915_private *dev_priv,
@ -547,7 +565,6 @@ bool bxt_find_best_dpll(struct intel_crtc_state *crtc_state,
struct dpll *best_clock);
int chv_calc_dpll_params(int refclk, struct dpll *pll_clock);
bool intel_crtc_active(struct intel_crtc *crtc);
bool hsw_crtc_state_ips_capable(const struct intel_crtc_state *crtc_state);
void hsw_enable_ips(const struct intel_crtc_state *crtc_state);
void hsw_disable_ips(const struct intel_crtc_state *crtc_state);
@ -561,6 +578,8 @@ void intel_crtc_arm_fifo_underrun(struct intel_crtc *crtc,
u16 skl_scaler_calc_phase(int sub, int scale, bool chroma_center);
int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state);
void skylake_scaler_disable(const struct intel_crtc_state *old_crtc_state);
void ironlake_pfit_disable(const struct intel_crtc_state *old_crtc_state);
u32 glk_plane_color_ctl(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
u32 glk_plane_color_ctl_crtc(const struct intel_crtc_state *crtc_state);
@ -582,6 +601,10 @@ intel_display_capture_error_state(struct drm_i915_private *dev_priv);
void intel_display_print_error_state(struct drm_i915_error_state_buf *e,
struct intel_display_error_state *error);
bool
intel_format_info_is_yuv_semiplanar(const struct drm_format_info *info,
uint64_t modifier);
/* modesetting */
void intel_modeset_init_hw(struct drm_i915_private *i915);
int intel_modeset_init(struct drm_i915_private *i915);
@ -603,9 +626,10 @@ void assert_fdi_rx_pll(struct drm_i915_private *dev_priv,
enum pipe pipe, bool state);
#define assert_fdi_rx_pll_enabled(d, p) assert_fdi_rx_pll(d, p, true)
#define assert_fdi_rx_pll_disabled(d, p) assert_fdi_rx_pll(d, p, false)
void assert_pipe(struct drm_i915_private *dev_priv, enum pipe pipe, bool state);
#define assert_pipe_enabled(d, p) assert_pipe(d, p, true)
#define assert_pipe_disabled(d, p) assert_pipe(d, p, false)
void assert_pipe(struct drm_i915_private *dev_priv,
enum transcoder cpu_transcoder, bool state);
#define assert_pipe_enabled(d, t) assert_pipe(d, t, true)
#define assert_pipe_disabled(d, t) assert_pipe(d, t, false)
/* Use I915_STATE_WARN(x) and I915_STATE_WARN_ON() (rather than WARN() and
* WARN_ON()) for hw state sanity checks to check for unexpected conditions


@ -418,7 +418,8 @@ icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
int pw_idx = power_well->desc->hsw.idx;
enum phy phy = ICL_AUX_PW_TO_PHY(pw_idx);
u32 val;
int wa_idx_max;
WARN_ON(!IS_ICELAKE(dev_priv));
val = I915_READ(regs->driver);
I915_WRITE(regs->driver, val | HSW_PWR_WELL_CTL_REQ(pw_idx));
@ -430,14 +431,8 @@ icl_combo_phy_aux_power_well_enable(struct drm_i915_private *dev_priv,
hsw_wait_for_power_well_enable(dev_priv, power_well);
/* Display WA #1178: icl, tgl */
if (IS_TIGERLAKE(dev_priv))
wa_idx_max = ICL_PW_CTL_IDX_AUX_C;
else
wa_idx_max = ICL_PW_CTL_IDX_AUX_B;
if (!IS_ELKHARTLAKE(dev_priv) &&
pw_idx >= ICL_PW_CTL_IDX_AUX_A && pw_idx <= wa_idx_max &&
/* Display WA #1178: icl */
if (pw_idx >= ICL_PW_CTL_IDX_AUX_A && pw_idx <= ICL_PW_CTL_IDX_AUX_B &&
!intel_bios_is_port_edp(dev_priv, (enum port)phy)) {
val = I915_READ(ICL_AUX_ANAOVRD1(pw_idx));
val |= ICL_AUX_ANAOVRD1_ENABLE | ICL_AUX_ANAOVRD1_LDO_BYPASS;
@ -454,10 +449,10 @@ icl_combo_phy_aux_power_well_disable(struct drm_i915_private *dev_priv,
enum phy phy = ICL_AUX_PW_TO_PHY(pw_idx);
u32 val;
if (INTEL_GEN(dev_priv) < 12) {
val = I915_READ(ICL_PORT_CL_DW12(phy));
I915_WRITE(ICL_PORT_CL_DW12(phy), val & ~ICL_LANE_ENABLE_AUX);
}
WARN_ON(!IS_ICELAKE(dev_priv));
val = I915_READ(ICL_PORT_CL_DW12(phy));
I915_WRITE(ICL_PORT_CL_DW12(phy), val & ~ICL_LANE_ENABLE_AUX);
val = I915_READ(regs->driver);
I915_WRITE(regs->driver, val & ~HSW_PWR_WELL_CTL_REQ(pw_idx));
@ -3688,6 +3683,151 @@ static const struct i915_power_well_desc icl_power_wells[] = {
},
};
static const struct i915_power_well_desc ehl_power_wells[] = {
{
.name = "always-on",
.always_on = true,
.domains = POWER_DOMAIN_MASK,
.ops = &i9xx_always_on_power_well_ops,
.id = DISP_PW_ID_NONE,
},
{
.name = "power well 1",
/* Handled by the DMC firmware */
.always_on = true,
.domains = 0,
.ops = &hsw_power_well_ops,
.id = SKL_DISP_PW_1,
{
.hsw.regs = &hsw_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_PW_1,
.hsw.has_fuses = true,
},
},
{
.name = "DC off",
.domains = ICL_DISPLAY_DC_OFF_POWER_DOMAINS,
.ops = &gen9_dc_off_power_well_ops,
.id = SKL_DISP_DC_OFF,
},
{
.name = "power well 2",
.domains = ICL_PW_2_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = SKL_DISP_PW_2,
{
.hsw.regs = &hsw_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_PW_2,
.hsw.has_fuses = true,
},
},
{
.name = "power well 3",
.domains = ICL_PW_3_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &hsw_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_PW_3,
.hsw.irq_pipe_mask = BIT(PIPE_B),
.hsw.has_vga = true,
.hsw.has_fuses = true,
},
},
{
.name = "DDI A IO",
.domains = ICL_DDI_IO_A_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_ddi_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_DDI_A,
},
},
{
.name = "DDI B IO",
.domains = ICL_DDI_IO_B_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_ddi_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_DDI_B,
},
},
{
.name = "DDI C IO",
.domains = ICL_DDI_IO_C_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_ddi_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_DDI_C,
},
},
{
.name = "DDI D IO",
.domains = ICL_DDI_IO_D_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_ddi_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_DDI_D,
},
},
{
.name = "AUX A",
.domains = ICL_AUX_A_IO_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_AUX_A,
},
},
{
.name = "AUX B",
.domains = ICL_AUX_B_IO_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_AUX_B,
},
},
{
.name = "AUX C",
.domains = ICL_AUX_C_TC1_IO_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_AUX_C,
},
},
{
.name = "AUX D",
.domains = ICL_AUX_D_TC2_IO_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_AUX_D,
},
},
{
.name = "power well 4",
.domains = ICL_PW_4_POWER_DOMAINS,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &hsw_power_well_regs,
.hsw.idx = ICL_PW_CTL_IDX_PW_4,
.hsw.has_fuses = true,
.hsw.irq_pipe_mask = BIT(PIPE_C),
},
},
};
static const struct i915_power_well_desc tgl_power_wells[] = {
{
.name = "always-on",
@ -3832,7 +3972,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
{
.name = "AUX A",
.domains = TGL_AUX_A_IO_POWER_DOMAINS,
.ops = &icl_combo_phy_aux_power_well_ops,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
@ -3842,7 +3982,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
{
.name = "AUX B",
.domains = TGL_AUX_B_IO_POWER_DOMAINS,
.ops = &icl_combo_phy_aux_power_well_ops,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
@ -3852,7 +3992,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
{
.name = "AUX C",
.domains = TGL_AUX_C_IO_POWER_DOMAINS,
.ops = &icl_combo_phy_aux_power_well_ops,
.ops = &hsw_power_well_ops,
.id = DISP_PW_ID_NONE,
{
.hsw.regs = &icl_aux_power_well_regs,
@ -4162,6 +4302,8 @@ int intel_power_domains_init(struct drm_i915_private *dev_priv)
*/
if (IS_GEN(dev_priv, 12)) {
err = set_power_wells(power_domains, tgl_power_wells);
} else if (IS_ELKHARTLAKE(dev_priv)) {
err = set_power_wells(power_domains, ehl_power_wells);
} else if (IS_GEN(dev_priv, 11)) {
err = set_power_wells(power_domains, icl_power_wells);
} else if (IS_CANNONLAKE(dev_priv)) {
@ -4781,6 +4923,56 @@ static void cnl_display_core_uninit(struct drm_i915_private *dev_priv)
intel_combo_phy_uninit(dev_priv);
}
struct buddy_page_mask {
u32 page_mask;
u8 type;
u8 num_channels;
};
static const struct buddy_page_mask tgl_buddy_page_masks[] = {
{ .num_channels = 1, .type = INTEL_DRAM_LPDDR4, .page_mask = 0xE },
{ .num_channels = 1, .type = INTEL_DRAM_DDR4, .page_mask = 0xF },
{ .num_channels = 2, .type = INTEL_DRAM_LPDDR4, .page_mask = 0x1C },
{ .num_channels = 2, .type = INTEL_DRAM_DDR4, .page_mask = 0x1F },
{}
};
static const struct buddy_page_mask wa_1409767108_buddy_page_masks[] = {
{ .num_channels = 1, .type = INTEL_DRAM_LPDDR4, .page_mask = 0x1 },
{ .num_channels = 1, .type = INTEL_DRAM_DDR4, .page_mask = 0x1 },
{ .num_channels = 2, .type = INTEL_DRAM_LPDDR4, .page_mask = 0x3 },
{ .num_channels = 2, .type = INTEL_DRAM_DDR4, .page_mask = 0x3 },
{}
};
static void tgl_bw_buddy_init(struct drm_i915_private *dev_priv)
{
enum intel_dram_type type = dev_priv->dram_info.type;
u8 num_channels = dev_priv->dram_info.num_channels;
const struct buddy_page_mask *table;
int i;
if (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_A0))
/* Wa_1409767108: tgl */
table = wa_1409767108_buddy_page_masks;
else
table = tgl_buddy_page_masks;
for (i = 0; table[i].page_mask != 0; i++)
if (table[i].num_channels == num_channels &&
table[i].type == type)
break;
if (table[i].page_mask == 0) {
DRM_DEBUG_DRIVER("Unknown memory configuration; disabling address buddy logic.\n");
I915_WRITE(BW_BUDDY1_CTL, BW_BUDDY_DISABLE);
I915_WRITE(BW_BUDDY2_CTL, BW_BUDDY_DISABLE);
} else {
I915_WRITE(BW_BUDDY1_PAGE_MASK, table[i].page_mask);
I915_WRITE(BW_BUDDY2_PAGE_MASK, table[i].page_mask);
}
}
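/*
 * Example lookup with the tables above: 2-channel LPDDR4 on a non-A0
 * TGL part matches the third tgl_buddy_page_masks[] entry, so both
 * BW_BUDDY page-mask registers get 0x1C; an unrecognised DRAM
 * configuration falls through to the terminating entry and disables
 * the buddy logic instead.
 */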
static void icl_display_core_init(struct drm_i915_private *dev_priv,
bool resume)
{
@ -4813,6 +5005,10 @@ static void icl_display_core_init(struct drm_i915_private *dev_priv,
/* 6. Setup MBUS. */
icl_mbus_init(dev_priv);
/* 7. Program arbiter BW_BUDDY registers */
if (INTEL_GEN(dev_priv) >= 12)
tgl_bw_buddy_init(dev_priv);
if (resume && dev_priv->csr.dmc_payload)
intel_csr_load_program(dev_priv);
}


@ -28,7 +28,7 @@ enum intel_display_power_domain {
POWER_DOMAIN_TRANSCODER_C,
POWER_DOMAIN_TRANSCODER_D,
POWER_DOMAIN_TRANSCODER_EDP,
/* VDSC/joining for TRANSCODER_EDP (ICL) or TRANSCODER_A (TGL) */
/* VDSC/joining for eDP/DSI transcoder (ICL) or pipe A (TGL) */
POWER_DOMAIN_TRANSCODER_VDSC_PW2,
POWER_DOMAIN_TRANSCODER_DSI_A,
POWER_DOMAIN_TRANSCODER_DSI_C,


@ -523,7 +523,24 @@ struct intel_atomic_state {
};
struct intel_plane_state {
struct drm_plane_state base;
struct drm_plane_state uapi;
/*
* actual hardware state, the state we program to the hardware.
* The following members are used to verify the hardware state:
* During initial hw readout, they need to be copied from uapi.
*/
struct {
struct drm_crtc *crtc;
struct drm_framebuffer *fb;
u16 alpha;
uint16_t pixel_blend_mode;
unsigned int rotation;
enum drm_color_encoding color_encoding;
enum drm_color_range color_range;
} hw;
struct i915_ggtt_view view;
struct i915_vma *vma;
unsigned long flags;
@ -546,6 +563,9 @@ struct intel_plane_state {
/* plane color control register */
u32 color_ctl;
/* chroma upsampler control register */
u32 cus_ctl;
/*
* scaler_id
* = -1 : not using a scaler
@ -757,7 +777,33 @@ enum intel_output_format {
};
struct intel_crtc_state {
struct drm_crtc_state base;
/*
* uapi (drm) state. This is the software state shown to userspace.
* In particular, the following members are used for bookkeeping:
* - crtc
* - state
* - *_changed
* - event
* - commit
* - mode_blob
*/
struct drm_crtc_state uapi;
/*
* actual hardware state, the state we program to the hardware.
* The following members are used to verify the hardware state:
* - enable
* - active
* - mode / adjusted_mode
* - color property blobs.
*
* During initial hw readout, they need to be copied to uapi.
*/
struct {
bool active, enable;
struct drm_property_blob *degamma_lut, *gamma_lut, *ctm;
struct drm_display_mode mode, adjusted_mode;
} hw;
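/*
 * A minimal sketch of the readout copy the comment above describes
 * (helper name ours; the blob pointers would additionally need proper
 * reference counting):
 *
 *	static void copy_hw_to_uapi(struct intel_crtc_state *crtc_state)
 *	{
 *		crtc_state->uapi.enable = crtc_state->hw.enable;
 *		crtc_state->uapi.active = crtc_state->hw.active;
 *		crtc_state->uapi.mode = crtc_state->hw.mode;
 *		crtc_state->uapi.adjusted_mode = crtc_state->hw.adjusted_mode;
 *	}
 */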
/**
* quirks - bitfield with hw state readout quirks
@ -1080,9 +1126,6 @@ struct intel_plane {
void (*update_plane)(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void (*update_slave)(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void (*disable_plane)(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state);
bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe);
@ -1113,12 +1156,12 @@ struct cxsr_latency {
#define to_intel_atomic_state(x) container_of(x, struct intel_atomic_state, base)
#define to_intel_crtc(x) container_of(x, struct intel_crtc, base)
#define to_intel_crtc_state(x) container_of(x, struct intel_crtc_state, base)
#define to_intel_crtc_state(x) container_of(x, struct intel_crtc_state, uapi)
#define to_intel_connector(x) container_of(x, struct intel_connector, base)
#define to_intel_encoder(x) container_of(x, struct intel_encoder, base)
#define to_intel_framebuffer(x) container_of(x, struct intel_framebuffer, base)
#define to_intel_plane(x) container_of(x, struct intel_plane, base)
#define to_intel_plane_state(x) container_of(x, struct intel_plane_state, base)
#define to_intel_plane_state(x) container_of(x, struct intel_plane_state, uapi)
#define intel_fb_obj(x) ((x) ? to_intel_bo((x)->obj[0]) : NULL)
struct intel_hdmi {
@ -1528,6 +1571,24 @@ intel_atomic_get_new_crtc_state(struct intel_atomic_state *state,
&crtc->base));
}
static inline struct intel_digital_connector_state *
intel_atomic_get_new_connector_state(struct intel_atomic_state *state,
struct intel_connector *connector)
{
return to_intel_digital_connector_state(
drm_atomic_get_new_connector_state(&state->base,
&connector->base));
}
static inline struct intel_digital_connector_state *
intel_atomic_get_old_connector_state(struct intel_atomic_state *state,
struct intel_connector *connector)
{
return to_intel_digital_connector_state(
drm_atomic_get_old_connector_state(&state->base,
&connector->base));
}
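/*
 * Usage sketch (names ours): fetch both sides of a connector's state
 * inside an atomic check.
 *
 *	struct intel_digital_connector_state *old_s =
 *		intel_atomic_get_old_connector_state(state, connector);
 *	struct intel_digital_connector_state *new_s =
 *		intel_atomic_get_new_connector_state(state, connector);
 */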
/* intel_display.c */
static inline bool
intel_crtc_has_type(const struct intel_crtc_state *crtc_state,


@ -1889,32 +1889,15 @@ static bool intel_dp_supports_fec(struct intel_dp *intel_dp,
drm_dp_sink_supports_fec(intel_dp->fec_capable);
}
static bool intel_dp_source_supports_dsc(struct intel_dp *intel_dp,
const struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
if (!INTEL_INFO(dev_priv)->display.has_dsc)
return false;
/* On TGL, DSC is supported on all Pipes */
if (INTEL_GEN(dev_priv) >= 12)
return true;
if (INTEL_GEN(dev_priv) >= 10 &&
pipe_config->cpu_transcoder != TRANSCODER_A)
return true;
return false;
}
static bool intel_dp_supports_dsc(struct intel_dp *intel_dp,
const struct intel_crtc_state *pipe_config)
const struct intel_crtc_state *crtc_state)
{
if (!intel_dp_is_edp(intel_dp) && !pipe_config->fec_enable)
struct intel_encoder *encoder = &dp_to_dig_port(intel_dp)->base;
if (!intel_dp_is_edp(intel_dp) && !crtc_state->fec_enable)
return false;
return intel_dp_source_supports_dsc(intel_dp, pipe_config) &&
return intel_dsc_source_support(encoder, crtc_state) &&
drm_dp_sink_supports_dsc(intel_dp->dsc_dpcd);
}
@ -1999,7 +1982,7 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
struct intel_crtc_state *pipe_config,
const struct link_config_limits *limits)
{
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
int bpp, clock, lane_count;
int mode_rate, link_clock, link_avail;
@ -2046,6 +2029,63 @@ static int intel_dp_dsc_compute_bpp(struct intel_dp *intel_dp, u8 dsc_max_bpc)
return 0;
}
#define DSC_SUPPORTED_VERSION_MIN 1
static int intel_dp_dsc_compute_params(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
u8 line_buf_depth;
int ret;
ret = intel_dsc_compute_params(encoder, crtc_state);
if (ret)
return ret;
/*
* Slice Height of 8 works for all currently available panels. So start
* with that if pic_height is an integral multiple of 8. Eventually add
* logic to try multiple slice heights.
*/
if (vdsc_cfg->pic_height % 8 == 0)
vdsc_cfg->slice_height = 8;
else if (vdsc_cfg->pic_height % 4 == 0)
vdsc_cfg->slice_height = 4;
else
vdsc_cfg->slice_height = 2;
vdsc_cfg->dsc_version_major =
(intel_dp->dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] &
DP_DSC_MAJOR_MASK) >> DP_DSC_MAJOR_SHIFT;
vdsc_cfg->dsc_version_minor =
min(DSC_SUPPORTED_VERSION_MIN,
(intel_dp->dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] &
DP_DSC_MINOR_MASK) >> DP_DSC_MINOR_SHIFT);
vdsc_cfg->convert_rgb = intel_dp->dsc_dpcd[DP_DSC_DEC_COLOR_FORMAT_CAP - DP_DSC_SUPPORT] &
DP_DSC_RGB;
line_buf_depth = drm_dp_dsc_sink_line_buf_depth(intel_dp->dsc_dpcd);
if (!line_buf_depth) {
DRM_DEBUG_KMS("DSC Sink Line Buffer Depth invalid\n");
return -EINVAL;
}
if (vdsc_cfg->dsc_version_minor == 2)
vdsc_cfg->line_buf_depth = (line_buf_depth == DSC_1_2_MAX_LINEBUF_DEPTH_BITS) ?
DSC_1_2_MAX_LINEBUF_DEPTH_VAL : line_buf_depth;
else
vdsc_cfg->line_buf_depth = (line_buf_depth > DSC_1_1_MAX_LINEBUF_DEPTH_BITS) ?
DSC_1_1_MAX_LINEBUF_DEPTH_BITS : line_buf_depth;
vdsc_cfg->block_pred_enable =
intel_dp->dsc_dpcd[DP_DSC_BLK_PREDICTION_SUPPORT - DP_DSC_SUPPORT] &
DP_DSC_BLK_PREDICTION_IS_SUPPORTED;
return drm_dsc_compute_rc_parameters(vdsc_cfg);
}
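/*
 * Worked example of the slice height pick above: a 1080-line frame
 * divides by 8, so slice_height is 8; a 900-line panel only divides
 * by 4, giving slice_height 4.
 */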
static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state,
@ -2053,7 +2093,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
{
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
u8 dsc_max_bpc;
int pipe_bpp;
int ret;
@ -2132,7 +2172,7 @@ static int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
}
}
ret = intel_dp_compute_dsc_params(intel_dp, pipe_config);
ret = intel_dp_dsc_compute_params(&dig_port->base, pipe_config);
if (ret < 0) {
DRM_DEBUG_KMS("Cannot compute valid DSC parameters for Input Bpp = %d "
"Compressed BPP = %d\n",
@ -2164,7 +2204,7 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
{
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct link_config_limits limits;
int common_len;
@ -2252,8 +2292,8 @@ intel_dp_ycbcr420_config(struct intel_dp *intel_dp,
{
const struct drm_display_info *info = &connector->display_info;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
&crtc_state->hw.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
int ret;
if (!drm_mode_is_420_only(info, adjusted_mode) ||
@ -2281,7 +2321,7 @@ bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
const struct intel_digital_connector_state *intel_conn_state =
to_intel_digital_connector_state(conn_state);
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
/*
* Our YCbCr output is always limited range.
@ -2308,17 +2348,28 @@ bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
}
}
static bool intel_dp_port_has_audio(struct drm_i915_private *dev_priv,
enum port port)
{
if (IS_G4X(dev_priv))
return false;
if (INTEL_GEN(dev_priv) < 12 && port == PORT_A)
return false;
return true;
}
int
intel_dp_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config,
struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_lspcon *lspcon = enc_to_intel_lspcon(&encoder->base);
enum port port = encoder->port;
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_connector *intel_connector = intel_dp->attached_connector;
struct intel_digital_connector_state *intel_conn_state =
to_intel_digital_connector_state(conn_state);
@ -2341,7 +2392,7 @@ intel_dp_compute_config(struct intel_encoder *encoder,
return ret;
pipe_config->has_drrs = false;
if (IS_G4X(dev_priv) || port == PORT_A)
if (!intel_dp_port_has_audio(dev_priv, port))
pipe_config->has_audio = false;
else if (intel_conn_state->force_audio == HDMI_AUDIO_AUTO)
pipe_config->has_audio = intel_dp->has_audio;
@ -2433,8 +2484,8 @@ static void intel_dp_prepare(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
enum port port = encoder->port;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
intel_dp_set_link_params(intel_dp, pipe_config->port_clock,
pipe_config->lane_count,
@ -3031,7 +3082,7 @@ static void assert_edp_pll(struct drm_i915_private *dev_priv, bool state)
static void ironlake_edp_pll_on(struct intel_dp *intel_dp,
const struct intel_crtc_state *pipe_config)
{
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
assert_pipe_disabled(dev_priv, crtc->pipe);
@ -3071,7 +3122,7 @@ static void ironlake_edp_pll_on(struct intel_dp *intel_dp,
static void ironlake_edp_pll_off(struct intel_dp *intel_dp,
const struct intel_crtc_state *old_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
assert_pipe_disabled(dev_priv, crtc->pipe);
@ -3231,7 +3282,7 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
u32 tmp, flags = 0;
enum port port = encoder->port;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
if (encoder->type == INTEL_OUTPUT_EDP)
pipe_config->output_types |= BIT(INTEL_OUTPUT_EDP);
@ -3266,7 +3317,7 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
flags |= DRM_MODE_FLAG_NVSYNC;
}
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
if (IS_G4X(dev_priv) && tmp & DP_COLOR_RANGE_16_235)
pipe_config->limited_color_range = true;
@ -3283,7 +3334,7 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
pipe_config->port_clock = 270000;
}
pipe_config->base.adjusted_mode.crtc_clock =
pipe_config->hw.adjusted_mode.crtc_clock =
intel_dotclock_calculate(pipe_config->port_clock,
&pipe_config->dp_m_n);
@ -3498,7 +3549,7 @@ static void intel_enable_dp(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
u32 dp_reg = I915_READ(intel_dp->output_reg);
enum pipe pipe = crtc->pipe;
intel_wakeref_t wakeref;
@ -3631,7 +3682,7 @@ static void vlv_init_panel_power_sequencer(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
lockdep_assert_held(&dev_priv->pps_mutex);
@ -4153,7 +4204,7 @@ intel_dp_link_down(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
enum port port = encoder->port;
u32 DP = intel_dp->DP;
@ -5076,7 +5127,7 @@ int intel_dp_retrain_link(struct intel_encoder *encoder,
WARN_ON(!intel_crtc_has_dp_encoder(crtc_state));
if (!crtc_state->base.active)
if (!crtc_state->hw.active)
return 0;
if (conn_state->commit &&
@ -5482,7 +5533,7 @@ static bool intel_combo_phy_connected(struct drm_i915_private *dev_priv,
return I915_READ(SDEISR) & SDE_DDI_HOTPLUG_ICP(phy);
}
static bool icl_digital_port_connected(struct intel_encoder *encoder)
static bool icp_digital_port_connected(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
@ -5520,9 +5571,9 @@ static bool __intel_digital_port_connected(struct intel_encoder *encoder)
return g4x_digital_port_connected(encoder);
}
if (INTEL_GEN(dev_priv) >= 11)
return icl_digital_port_connected(encoder);
else if (IS_GEN(dev_priv, 10) || IS_GEN9_BC(dev_priv))
if (INTEL_PCH_TYPE(dev_priv) >= PCH_ICP)
return icp_digital_port_connected(encoder);
else if (INTEL_PCH_TYPE(dev_priv) >= PCH_SPT)
return spt_digital_port_connected(encoder);
else if (IS_GEN9_LP(dev_priv))
return bxt_digital_port_connected(encoder);
@ -6909,7 +6960,7 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
int refresh_rate)
{
struct intel_dp *intel_dp = dev_priv->drrs.dp;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum drrs_refresh_rate_type index = DRRS_HIGH_RR;
if (refresh_rate <= 0) {
@ -6942,7 +6993,7 @@ static void intel_dp_set_drrs_state(struct drm_i915_private *dev_priv,
return;
}
if (!crtc_state->base.active) {
if (!crtc_state->hw.active) {
DRM_DEBUG_KMS("eDP encoder disabled. CRTC not Active\n");
return;
}


@ -42,13 +42,13 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
struct drm_connector_state *conn_state,
struct link_config_limits *limits)
{
struct drm_atomic_state *state = crtc_state->base.state;
struct drm_atomic_state *state = crtc_state->uapi.state;
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
struct intel_dp *intel_dp = &intel_mst->primary->dp;
struct intel_connector *connector =
to_intel_connector(conn_state->connector);
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
void *port = connector->port;
bool constant_n = drm_dp_has_quirk(&intel_dp->desc,
DP_DPCD_QUIRK_CONSTANT_N);
@ -99,7 +99,7 @@ static int intel_dp_mst_compute_config(struct intel_encoder *encoder,
struct intel_digital_connector_state *intel_conn_state =
to_intel_digital_connector_state(conn_state);
const struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
void *port = connector->port;
struct link_config_limits limits;
int ret;
@ -168,7 +168,6 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
struct intel_connector *intel_connector =
to_intel_connector(connector);
struct drm_crtc *new_crtc = new_conn_state->crtc;
struct drm_crtc_state *crtc_state;
struct drm_dp_mst_topology_mgr *mgr;
int ret;
@ -183,11 +182,16 @@ intel_dp_mst_atomic_check(struct drm_connector *connector,
* connector
*/
if (new_crtc) {
crtc_state = drm_atomic_get_new_crtc_state(state, new_crtc);
struct intel_atomic_state *intel_state =
to_intel_atomic_state(state);
struct intel_crtc *intel_crtc = to_intel_crtc(new_crtc);
struct intel_crtc_state *crtc_state =
intel_atomic_get_new_crtc_state(intel_state,
intel_crtc);
if (!crtc_state ||
!drm_atomic_crtc_needs_modeset(crtc_state) ||
crtc_state->enable)
!drm_atomic_crtc_needs_modeset(&crtc_state->uapi) ||
crtc_state->uapi.enable)
return 0;
}
@ -231,8 +235,32 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
struct intel_dp *intel_dp = &intel_dig_port->dp;
struct intel_connector *connector =
to_intel_connector(old_conn_state->connector);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
bool last_mst_stream;
intel_ddi_disable_pipe_clock(old_crtc_state);
intel_dp->active_mst_links--;
last_mst_stream = intel_dp->active_mst_links == 0;
intel_crtc_vblank_off(old_crtc_state);
intel_disable_pipe(old_crtc_state);
intel_ddi_disable_transcoder_func(old_crtc_state);
if (INTEL_GEN(dev_priv) >= 9)
skylake_scaler_disable(old_crtc_state);
else
ironlake_pfit_disable(old_crtc_state);
/*
* From TGL spec: "If multi-stream slave transcoder: Configure
* Transcoder Clock Select to direct no clock to the transcoder"
*
* From older GENs spec: "Configure Transcoder Clock Select to direct
* no clock to the transcoder"
*/
if (INTEL_GEN(dev_priv) < 12 || !last_mst_stream)
intel_ddi_disable_pipe_clock(old_crtc_state);
/* this can fail */
drm_dp_check_act_status(&intel_dp->mst_mgr);
@ -248,14 +276,10 @@ static void intel_mst_post_disable_dp(struct intel_encoder *encoder,
drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port,
false);
intel_dp->active_mst_links--;
intel_mst->connector = NULL;
if (intel_dp->active_mst_links == 0) {
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
if (last_mst_stream)
intel_dig_port->base.post_disable(&intel_dig_port->base,
old_crtc_state, NULL);
}
DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
}
@ -273,20 +297,6 @@ static void intel_mst_pre_pll_enable_dp(struct intel_encoder *encoder,
pipe_config, NULL);
}
static void intel_mst_post_pll_disable_dp(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
struct intel_digital_port *intel_dig_port = intel_mst->primary;
struct intel_dp *intel_dp = &intel_dig_port->dp;
if (intel_dp->active_mst_links == 0)
intel_dig_port->base.post_pll_disable(&intel_dig_port->base,
old_crtc_state,
old_conn_state);
}
static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config,
const struct drm_connector_state *conn_state)
@ -299,21 +309,23 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
to_intel_connector(conn_state->connector);
int ret;
u32 temp;
bool first_mst_stream;
/* MST encoders are bound to a crtc, not to a connector,
* force the mapping here for get_hw_state.
*/
connector->encoder = encoder;
intel_mst->connector = connector;
first_mst_stream = intel_dp->active_mst_links == 0;
DRM_DEBUG_KMS("active links %d\n", intel_dp->active_mst_links);
if (intel_dp->active_mst_links == 0)
if (first_mst_stream)
intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
drm_dp_send_power_updown_phy(&intel_dp->mst_mgr, connector->port, true);
if (intel_dp->active_mst_links == 0)
if (first_mst_stream)
intel_dig_port->base.pre_enable(&intel_dig_port->base,
pipe_config, NULL);
@ -330,7 +342,15 @@ static void intel_mst_pre_enable_dp(struct intel_encoder *encoder,
ret = drm_dp_update_payload_part1(&intel_dp->mst_mgr);
intel_ddi_enable_pipe_clock(pipe_config);
/*
* Before Gen 12 this is not done as part of
* intel_dig_port->base.pre_enable() and should be done here. For
* Gen 12+ the step in which this should be done is different for the
* first MST stream, so it's done on the DDI for the first stream and
* here for the following ones.
*/
if (INTEL_GEN(dev_priv) < 12 || !first_mst_stream)
intel_ddi_enable_pipe_clock(pipe_config);
intel_ddi_set_dp_msa(pipe_config, conn_state);
}
@ -633,7 +653,6 @@ intel_dp_create_fake_mst_encoder(struct intel_digital_port *intel_dig_port, enum
intel_encoder->disable = intel_mst_disable_dp;
intel_encoder->post_disable = intel_mst_post_disable_dp;
intel_encoder->pre_pll_enable = intel_mst_pre_pll_enable_dp;
intel_encoder->post_pll_disable = intel_mst_post_pll_disable_dp;
intel_encoder->pre_enable = intel_mst_pre_enable_dp;
intel_encoder->enable = intel_mst_enable_dp;
intel_encoder->get_hw_state = intel_dp_mst_enc_get_hw_state;


@ -739,7 +739,7 @@ void chv_data_lane_soft_reset(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum dpio_channel ch = vlv_dport_to_channel(enc_to_dig_port(&encoder->base));
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
u32 val;
@ -783,7 +783,7 @@ void chv_phy_pre_pll_enable(struct intel_encoder *encoder,
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum dpio_channel ch = vlv_dport_to_channel(dport);
enum pipe pipe = crtc->pipe;
unsigned int lane_mask =
@ -864,7 +864,7 @@ void chv_phy_pre_encoder_enable(struct intel_encoder *encoder,
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_digital_port *dport = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum dpio_channel ch = vlv_dport_to_channel(dport);
enum pipe pipe = crtc->pipe;
int data, i, stagger;
@ -953,7 +953,7 @@ void chv_phy_post_pll_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *old_crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum pipe pipe = to_intel_crtc(old_crtc_state->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(old_crtc_state->uapi.crtc)->pipe;
u32 val;
vlv_dpio_get(dev_priv);
@ -1016,7 +1016,7 @@ void vlv_phy_pre_pll_enable(struct intel_encoder *encoder,
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum dpio_channel port = vlv_dport_to_channel(dport);
enum pipe pipe = crtc->pipe;
@ -1046,7 +1046,7 @@ void vlv_phy_pre_encoder_enable(struct intel_encoder *encoder,
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_digital_port *dport = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum dpio_channel port = vlv_dport_to_channel(dport);
enum pipe pipe = crtc->pipe;
u32 val;
@ -1075,7 +1075,7 @@ void vlv_phy_reset_lanes(struct intel_encoder *encoder,
{
struct intel_digital_port *dport = enc_to_dig_port(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
enum dpio_channel port = vlv_dport_to_channel(dport);
enum pipe pipe = crtc->pipe;
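The hunks above all make the same substitution: code that needs the CRTC object now resolves it through crtc_state->uapi.crtc instead of the old crtc_state->base.crtc, following the hardware/uapi state split in this series. A minimal sketch of the resulting access pattern (the helper name is illustrative, not from the patch):

/*
 * Illustrative helper only: the uapi state carries what userspace
 * requested, the hw state carries what will actually be programmed.
 * PHY/encoder code resolves the CRTC through the uapi state and reads
 * timings through the hw state.
 */
static enum pipe example_crtc_pipe(const struct intel_crtc_state *crtc_state)
{
	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

	return crtc->pipe;
}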


@ -136,7 +136,7 @@ void assert_shared_dpll(struct drm_i915_private *dev_priv,
*/
void intel_prepare_shared_dpll(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_shared_dpll *pll = crtc_state->shared_dpll;
@ -163,7 +163,7 @@ void intel_prepare_shared_dpll(const struct intel_crtc_state *crtc_state)
*/
void intel_enable_shared_dpll(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_shared_dpll *pll = crtc_state->shared_dpll;
unsigned int crtc_mask = drm_crtc_mask(&crtc->base);
@ -208,7 +208,7 @@ out:
*/
void intel_disable_shared_dpll(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_shared_dpll *pll = crtc_state->shared_dpll;
unsigned int crtc_mask = drm_crtc_mask(&crtc->base);
@ -842,7 +842,7 @@ hsw_ddi_hdmi_get_dpll(struct intel_atomic_state *state,
static struct intel_shared_dpll *
hsw_ddi_dp_get_dpll(struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
struct intel_shared_dpll *pll;
enum intel_dpll_id pll_id;
int clock = crtc_state->port_clock;
@ -1751,7 +1751,7 @@ static bool
bxt_ddi_hdmi_pll_dividers(struct intel_crtc_state *crtc_state,
struct bxt_clk_div *clk_div)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct dpll best_clock;
/* Calculate HDMI div */
@ -2274,7 +2274,7 @@ static bool
cnl_ddi_calculate_wrpll(struct intel_crtc_state *crtc_state,
struct skl_wrpll_params *wrpll_params)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
u32 afe_clock = crtc_state->port_clock * 5;
u32 ref_clock;
u32 dco_min = 7998000;
@ -2553,7 +2553,7 @@ static const struct skl_wrpll_params tgl_tbt_pll_24MHz_values = {
static bool icl_calc_dp_combo_pll(struct intel_crtc_state *crtc_state,
struct skl_wrpll_params *pll_params)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
const struct icl_combo_pll_params *params =
dev_priv->cdclk.hw.ref == 24000 ?
icl_dp_combo_pll_24MHz_values :
@ -2575,7 +2575,7 @@ static bool icl_calc_dp_combo_pll(struct intel_crtc_state *crtc_state,
static bool icl_calc_tbt_pll(struct intel_crtc_state *crtc_state,
struct skl_wrpll_params *pll_params)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
if (INTEL_GEN(dev_priv) >= 12) {
switch (dev_priv->cdclk.hw.ref) {
@ -2612,7 +2612,7 @@ static bool icl_calc_dpll_state(struct intel_crtc_state *crtc_state,
struct intel_encoder *encoder,
struct intel_dpll_hw_state *pll_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
u32 cfgcr0, cfgcr1;
struct skl_wrpll_params pll_params = { 0 };
bool ret;
@ -2744,7 +2744,7 @@ static bool icl_mg_pll_find_divisors(int clock_khz, bool is_dp, bool use_ssc,
static bool icl_calc_mg_pll_state(struct intel_crtc_state *crtc_state,
struct intel_dpll_hw_state *pll_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);
struct drm_i915_private *dev_priv = to_i915(crtc_state->uapi.crtc->dev);
int refclk_khz = dev_priv->cdclk.hw.ref;
int clock = crtc_state->port_clock;
u32 dco_khz, m1div, m2div_int, m2div_rem, m2div_frac;


@ -102,43 +102,50 @@ intel_dsb_get(struct intel_crtc *crtc)
struct intel_dsb *dsb = &crtc->dsb;
struct drm_i915_gem_object *obj;
struct i915_vma *vma;
u32 *buf;
intel_wakeref_t wakeref;
if (!HAS_DSB(i915))
return dsb;
if (atomic_add_return(1, &dsb->refcount) != 1)
if (dsb->refcount++ != 0)
return dsb;
dsb->id = DSB1;
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
obj = i915_gem_object_create_internal(i915, DSB_BUF_SIZE);
if (IS_ERR(obj)) {
DRM_ERROR("Gem object creation failed\n");
goto err;
goto out;
}
vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, PIN_MAPPABLE);
vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
if (IS_ERR(vma)) {
DRM_ERROR("Vma creation failed\n");
i915_gem_object_put(obj);
atomic_dec(&dsb->refcount);
goto err;
goto out;
}
dsb->cmd_buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
if (IS_ERR(dsb->cmd_buf)) {
buf = i915_gem_object_pin_map(vma->obj, I915_MAP_WC);
if (IS_ERR(buf)) {
DRM_ERROR("Command buffer creation failed\n");
i915_vma_unpin_and_release(&vma, 0);
dsb->cmd_buf = NULL;
atomic_dec(&dsb->refcount);
goto err;
goto out;
}
dsb->vma = vma;
err:
dsb->id = DSB1;
dsb->vma = vma;
dsb->cmd_buf = buf;
out:
/*
* On error dsb->cmd_buf will continue to be NULL, making the writes
* pass-through. Leave the dangling ref to be removed later by the
* corresponding intel_dsb_put(): the important error message will
* already be logged above.
*/
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
return dsb;
}
@ -158,10 +165,10 @@ void intel_dsb_put(struct intel_dsb *dsb)
if (!HAS_DSB(i915))
return;
if (WARN_ON(atomic_read(&dsb->refcount) == 0))
if (WARN_ON(dsb->refcount == 0))
return;
if (atomic_dec_and_test(&dsb->refcount)) {
if (--dsb->refcount == 0) {
i915_vma_unpin_and_release(&dsb->vma, I915_VMA_RELEASE_MAP);
dsb->cmd_buf = NULL;
dsb->free_pos = 0;


@ -22,7 +22,7 @@ enum dsb_id {
};
struct intel_dsb {
atomic_t refcount;
long refcount;
enum dsb_id id;
u32 *cmd_buf;
struct i915_vma *vma;
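The conversion above drops the atomic_t in favour of a plain long because every intel_dsb_get()/intel_dsb_put() call already runs under the modeset locking, so atomic operations bought nothing. A hedged sketch of the pattern (the example_* names are not from the patch; the "lock" is whatever serialization the caller already holds):

/* Plain counter, valid only because get/put are externally serialized. */
struct example_dsb {
	long refcount;	/* protected by the caller's modeset locking */
};

static void example_dsb_get(struct example_dsb *dsb)
{
	if (dsb->refcount++ != 0)
		return;		/* already set up by a previous user */
	/* ... first-user initialization ... */
}

static void example_dsb_put(struct example_dsb *dsb)
{
	if (WARN_ON(dsb->refcount == 0))
		return;
	if (--dsb->refcount == 0) {
		/* ... release the backing buffer ... */
	}
}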


@ -178,9 +178,9 @@ static void intel_dvo_get_config(struct intel_encoder *encoder,
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
pipe_config->base.adjusted_mode.crtc_clock = pipe_config->port_clock;
pipe_config->hw.adjusted_mode.crtc_clock = pipe_config->port_clock;
}
static void intel_disable_dvo(struct intel_encoder *encoder,
@ -207,8 +207,8 @@ static void intel_enable_dvo(struct intel_encoder *encoder,
u32 temp = I915_READ(dvo_reg);
intel_dvo->dev.dev_ops->mode_set(&intel_dvo->dev,
&pipe_config->base.mode,
&pipe_config->base.adjusted_mode);
&pipe_config->hw.mode,
&pipe_config->hw.adjusted_mode);
I915_WRITE(dvo_reg, temp | DVO_ENABLE);
I915_READ(dvo_reg);
@ -253,7 +253,7 @@ static int intel_dvo_compute_config(struct intel_encoder *encoder,
struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
const struct drm_display_mode *fixed_mode =
intel_dvo->attached_connector->panel.fixed_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
/*
* If we have timings from the BIOS for the panel, put them in
@ -277,8 +277,8 @@ static void intel_dvo_pre_enable(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct intel_dvo *intel_dvo = enc_to_dvo(encoder);
enum pipe pipe = crtc->pipe;
u32 dvo_val;


@ -50,11 +50,6 @@ static inline bool fbc_supported(struct drm_i915_private *dev_priv)
return HAS_FBC(dev_priv);
}
static inline bool no_fbc_on_multiple_pipes(struct drm_i915_private *dev_priv)
{
return INTEL_GEN(dev_priv) <= 3;
}
/*
* On some platforms, where the CRTC's x:0/y:0 coordinates don't match the
* frontbuffer's x:0/y:0 coordinates, we lie to the hardware about the plane's
@ -73,7 +68,7 @@ static unsigned int get_crtc_fence_y_offset(struct intel_fbc *fbc)
* write to the PLANE_SIZE register. For BDW-, the hardware looks at the value
* we wrote to PIPESRC.
*/
static void intel_fbc_get_plane_source_size(struct intel_fbc_state_cache *cache,
static void intel_fbc_get_plane_source_size(const struct intel_fbc_state_cache *cache,
int *width, int *height)
{
if (width)
@ -83,7 +78,7 @@ static void intel_fbc_get_plane_source_size(struct intel_fbc_state_cache *cache,
}
static int intel_fbc_calculate_cfb_size(struct drm_i915_private *dev_priv,
struct intel_fbc_state_cache *cache)
const struct intel_fbc_state_cache *cache)
{
int lines;
@ -143,8 +138,10 @@ static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
u32 fbc_ctl2;
/* Set it up... */
fbc_ctl2 = FBC_CTL_FENCE_DBL | FBC_CTL_IDLE_IMM | FBC_CTL_CPU_FENCE;
fbc_ctl2 = FBC_CTL_FENCE_DBL | FBC_CTL_IDLE_IMM;
fbc_ctl2 |= FBC_CTL_PLANE(params->crtc.i9xx_plane);
if (params->fence_id >= 0)
fbc_ctl2 |= FBC_CTL_CPU_FENCE;
I915_WRITE(FBC_CONTROL2, fbc_ctl2);
I915_WRITE(FBC_FENCE_OFF, params->crtc.fence_y_offset);
}
@ -156,7 +153,8 @@ static void i8xx_fbc_activate(struct drm_i915_private *dev_priv)
if (IS_I945GM(dev_priv))
fbc_ctl |= FBC_CTL_C3_IDLE; /* 945 needs special SR handling */
fbc_ctl |= (cfb_pitch & 0xff) << FBC_CTL_STRIDE_SHIFT;
fbc_ctl |= params->vma->fence->id;
if (params->fence_id >= 0)
fbc_ctl |= params->fence_id;
I915_WRITE(FBC_CONTROL, fbc_ctl);
}
@ -176,8 +174,8 @@ static void g4x_fbc_activate(struct drm_i915_private *dev_priv)
else
dpfc_ctl |= DPFC_CTL_LIMIT_1X;
if (params->flags & PLANE_HAS_FENCE) {
dpfc_ctl |= DPFC_CTL_FENCE_EN | params->vma->fence->id;
if (params->fence_id >= 0) {
dpfc_ctl |= DPFC_CTL_FENCE_EN | params->fence_id;
I915_WRITE(DPFC_FENCE_YOFF, params->crtc.fence_y_offset);
} else {
I915_WRITE(DPFC_FENCE_YOFF, 0);
@ -234,14 +232,14 @@ static void ilk_fbc_activate(struct drm_i915_private *dev_priv)
break;
}
if (params->flags & PLANE_HAS_FENCE) {
if (params->fence_id >= 0) {
dpfc_ctl |= DPFC_CTL_FENCE_EN;
if (IS_GEN(dev_priv, 5))
dpfc_ctl |= params->vma->fence->id;
dpfc_ctl |= params->fence_id;
if (IS_GEN(dev_priv, 6)) {
I915_WRITE(SNB_DPFC_CTL_SA,
SNB_CPU_FENCE_ENABLE |
params->vma->fence->id);
params->fence_id);
I915_WRITE(DPFC_CPU_FENCE_OFFSET,
params->crtc.fence_y_offset);
}
@ -253,8 +251,6 @@ static void ilk_fbc_activate(struct drm_i915_private *dev_priv)
}
I915_WRITE(ILK_DPFC_FENCE_YOFF, params->crtc.fence_y_offset);
I915_WRITE(ILK_FBC_RT_BASE,
i915_ggtt_offset(params->vma) | ILK_FBC_RT_VALID);
/* enable it... */
I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN);
@ -285,13 +281,12 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
int threshold = dev_priv->fbc.threshold;
/* Display WA #0529: skl, kbl, bxt. */
if (IS_GEN(dev_priv, 9) && !IS_GEMINILAKE(dev_priv)) {
if (IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) {
u32 val = I915_READ(CHICKEN_MISC_4);
val &= ~(FBC_STRIDE_OVERRIDE | FBC_STRIDE_MASK);
if (i915_gem_object_get_tiling(params->vma->obj) !=
I915_TILING_X)
if (params->gen9_wa_cfb_stride)
val |= FBC_STRIDE_OVERRIDE | params->gen9_wa_cfb_stride;
I915_WRITE(CHICKEN_MISC_4, val);
@ -317,11 +312,11 @@ static void gen7_fbc_activate(struct drm_i915_private *dev_priv)
break;
}
if (params->flags & PLANE_HAS_FENCE) {
if (params->fence_id >= 0) {
dpfc_ctl |= IVB_DPFC_CTL_FENCE_EN;
I915_WRITE(SNB_DPFC_CTL_SA,
SNB_CPU_FENCE_ENABLE |
params->vma->fence->id);
params->fence_id);
I915_WRITE(DPFC_CPU_FENCE_OFFSET, params->crtc.fence_y_offset);
} else {
I915_WRITE(SNB_DPFC_CTL_SA,0);
@ -367,6 +362,7 @@ static void intel_fbc_hw_activate(struct drm_i915_private *dev_priv)
struct intel_fbc *fbc = &dev_priv->fbc;
fbc->active = true;
fbc->activated = true;
if (INTEL_GEN(dev_priv) >= 7)
gen7_fbc_activate(dev_priv);
@ -419,29 +415,10 @@ static void intel_fbc_deactivate(struct drm_i915_private *dev_priv,
fbc->no_fbc_reason = reason;
}
static bool multiple_pipes_ok(struct intel_crtc *crtc,
struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
enum pipe pipe = crtc->pipe;
/* Don't even bother tracking anything we don't need. */
if (!no_fbc_on_multiple_pipes(dev_priv))
return true;
if (plane_state->base.visible)
fbc->visible_pipes_mask |= (1 << pipe);
else
fbc->visible_pipes_mask &= ~(1 << pipe);
return (fbc->visible_pipes_mask & ~(1 << pipe)) != 0;
}
static int find_compression_threshold(struct drm_i915_private *dev_priv,
struct drm_mm_node *node,
int size,
int fb_cpp)
unsigned int size,
unsigned int fb_cpp)
{
int compression_threshold = 1;
int ret;
@ -487,18 +464,15 @@ again:
}
}
static int intel_fbc_alloc_cfb(struct intel_crtc *crtc)
static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
unsigned int size, unsigned int fb_cpp)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
struct drm_mm_node *uninitialized_var(compressed_llb);
int size, fb_cpp, ret;
int ret;
WARN_ON(drm_mm_node_allocated(&fbc->compressed_fb));
size = intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache);
fb_cpp = fbc->state_cache.fb.format->cpp[0];
ret = find_compression_threshold(dev_priv, &fbc->compressed_fb,
size, fb_cpp);
if (!ret)
@ -656,46 +630,55 @@ static bool intel_fbc_hw_tracking_covers_screen(struct intel_crtc *crtc)
}
static void intel_fbc_update_state_cache(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
struct intel_fbc_state_cache *cache = &fbc->state_cache;
struct drm_framebuffer *fb = plane_state->base.fb;
struct drm_framebuffer *fb = plane_state->hw.fb;
cache->vma = NULL;
cache->flags = 0;
cache->plane.visible = plane_state->uapi.visible;
if (!cache->plane.visible)
return;
cache->crtc.mode_flags = crtc_state->base.adjusted_mode.flags;
cache->crtc.mode_flags = crtc_state->hw.adjusted_mode.flags;
if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv))
cache->crtc.hsw_bdw_pixel_rate = crtc_state->pixel_rate;
cache->plane.rotation = plane_state->base.rotation;
cache->plane.rotation = plane_state->hw.rotation;
/*
* Src coordinates are already rotated by 270 degrees for
* the 90/270 degree plane rotation cases (to match the
* GTT mapping), hence no need to account for rotation here.
*/
cache->plane.src_w = drm_rect_width(&plane_state->base.src) >> 16;
cache->plane.src_h = drm_rect_height(&plane_state->base.src) >> 16;
cache->plane.visible = plane_state->base.visible;
cache->plane.src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
cache->plane.src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
cache->plane.adjusted_x = plane_state->color_plane[0].x;
cache->plane.adjusted_y = plane_state->color_plane[0].y;
cache->plane.y = plane_state->base.src.y1 >> 16;
cache->plane.y = plane_state->uapi.src.y1 >> 16;
cache->plane.pixel_blend_mode = plane_state->base.pixel_blend_mode;
if (!cache->plane.visible)
return;
cache->plane.pixel_blend_mode = plane_state->hw.pixel_blend_mode;
cache->fb.format = fb->format;
cache->fb.stride = fb->pitches[0];
cache->vma = plane_state->vma;
cache->flags = plane_state->flags;
if (WARN_ON(cache->flags & PLANE_HAS_FENCE && !cache->vma->fence))
cache->flags &= ~PLANE_HAS_FENCE;
WARN_ON(plane_state->flags & PLANE_HAS_FENCE &&
!plane_state->vma->fence);
if (plane_state->flags & PLANE_HAS_FENCE &&
plane_state->vma->fence)
cache->fence_id = plane_state->vma->fence->id;
else
cache->fence_id = -1;
}
static bool intel_fbc_cfb_size_changed(struct drm_i915_private *dev_priv)
{
struct intel_fbc *fbc = &dev_priv->fbc;
return intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache) >
fbc->compressed_fb.size * fbc->threshold;
}
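These hunks replace the cached vma pointer and PLANE_HAS_FENCE flags with a single cached fence register id, where -1 means "no fence". A sketch of the two halves of that pattern, mirroring the code above (the helper names are illustrative):

/* Cache side: record the fence id, or -1 for untiled/unfenced planes. */
static void example_cache_fence(struct intel_fbc_state_cache *cache,
				const struct intel_plane_state *plane_state)
{
	if (plane_state->flags & PLANE_HAS_FENCE &&
	    plane_state->vma->fence)
		cache->fence_id = plane_state->vma->fence->id;
	else
		cache->fence_id = -1;
}

/* Consumer side: every platform hook now just tests the id. */
static bool example_has_fence(const struct intel_fbc_state_cache *cache)
{
	return cache->fence_id >= 0;
}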
static bool intel_fbc_can_activate(struct intel_crtc *crtc)
@ -704,6 +687,11 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
struct intel_fbc *fbc = &dev_priv->fbc;
struct intel_fbc_state_cache *cache = &fbc->state_cache;
if (!cache->plane.visible) {
fbc->no_fbc_reason = "primary plane not visible";
return false;
}
/* We don't need to use a state cache here since this information is
* global for all CRTC.
*/
@ -712,11 +700,6 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
return false;
}
if (!cache->vma) {
fbc->no_fbc_reason = "primary plane not visible";
return false;
}
if (cache->crtc.mode_flags & DRM_MODE_FLAG_INTERLACE) {
fbc->no_fbc_reason = "incompatible mode";
return false;
@ -740,7 +723,7 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
* For now this will effectively disable FBC with 90/270 degree
* rotation.
*/
if (!(cache->flags & PLANE_HAS_FENCE)) {
if (cache->fence_id < 0) {
fbc->no_fbc_reason = "framebuffer not tiled or fenced";
return false;
}
@ -783,8 +766,7 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
* we didn't get any invalidate/deactivate calls, but this would require
* a lot of tracking just for a specific case. If we conclude it's an
* important case, we can implement it later. */
if (intel_fbc_calculate_cfb_size(dev_priv, &fbc->state_cache) >
fbc->compressed_fb.size * fbc->threshold) {
if (intel_fbc_cfb_size_changed(dev_priv)) {
fbc->no_fbc_reason = "CFB requirements changed";
return false;
}
@ -794,7 +776,7 @@ static bool intel_fbc_can_activate(struct intel_crtc *crtc)
* having a Y offset that isn't divisible by 4 causes FIFO underrun
* and screen flicker.
*/
if (IS_GEN_RANGE(dev_priv, 9, 10) &&
if (INTEL_GEN(dev_priv) >= 9 &&
(fbc->state_cache.plane.adjusted_y & 3)) {
fbc->no_fbc_reason = "plane Y offset is misaligned";
return false;
@ -837,8 +819,7 @@ static void intel_fbc_get_reg_params(struct intel_crtc *crtc,
* zero. */
memset(params, 0, sizeof(*params));
params->vma = cache->vma;
params->flags = cache->flags;
params->fence_id = cache->fence_id;
params->crtc.pipe = crtc->pipe;
params->crtc.i9xx_plane = to_intel_plane(crtc->base.primary)->i9xx_plane;
@ -849,39 +830,88 @@ static void intel_fbc_get_reg_params(struct intel_crtc *crtc,
params->cfb_size = intel_fbc_calculate_cfb_size(dev_priv, cache);
if (IS_GEN(dev_priv, 9) && !IS_GEMINILAKE(dev_priv))
params->gen9_wa_cfb_stride = DIV_ROUND_UP(cache->plane.src_w,
32 * fbc->threshold) * 8;
params->gen9_wa_cfb_stride = cache->gen9_wa_cfb_stride;
params->plane_visible = cache->plane.visible;
}
void intel_fbc_pre_update(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
static bool intel_fbc_can_flip_nuke(const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
const struct intel_fbc *fbc = &dev_priv->fbc;
const struct intel_fbc_state_cache *cache = &fbc->state_cache;
const struct intel_fbc_reg_params *params = &fbc->params;
if (drm_atomic_crtc_needs_modeset(&crtc_state->uapi))
return false;
if (!params->plane_visible)
return false;
if (!intel_fbc_can_activate(crtc))
return false;
if (params->fb.format != cache->fb.format)
return false;
if (params->fb.stride != cache->fb.stride)
return false;
if (params->cfb_size != intel_fbc_calculate_cfb_size(dev_priv, cache))
return false;
if (params->gen9_wa_cfb_stride != cache->gen9_wa_cfb_stride)
return false;
return true;
}
bool intel_fbc_pre_update(struct intel_crtc *crtc,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
const char *reason = "update pending";
bool need_vblank_wait = false;
if (!fbc_supported(dev_priv))
return;
return need_vblank_wait;
mutex_lock(&fbc->lock);
if (!multiple_pipes_ok(crtc, plane_state)) {
reason = "more than one pipe active";
goto deactivate;
}
if (!fbc->enabled || fbc->crtc != crtc)
if (fbc->crtc != crtc)
goto unlock;
intel_fbc_update_state_cache(crtc, crtc_state, plane_state);
fbc->flip_pending = true;
deactivate:
intel_fbc_deactivate(dev_priv, reason);
if (!intel_fbc_can_flip_nuke(crtc_state)) {
intel_fbc_deactivate(dev_priv, reason);
/*
* Display WA #1198: glk+
* Need an extra vblank wait between FBC disable and most plane
* updates. Bspec says this is only needed for plane disable, but
* that is not true. Touching most plane registers will cause the
* corruption to appear. Also SKL/derivatives do not seem to be
* affected.
*
* TODO: could optimize this a bit by sampling the frame
* counter when we disable FBC (if it was already done earlier)
* and skipping the extra vblank wait before the plane update
* if at least one frame has already passed.
*/
if (fbc->activated &&
(INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)))
need_vblank_wait = true;
fbc->activated = false;
}
unlock:
mutex_unlock(&fbc->lock);
return need_vblank_wait;
}
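intel_fbc_pre_update() now returns a bool telling the caller whether Display WA #1198 requires an extra vblank wait between FBC deactivation and the plane update. A usage sketch, assuming a vblank-wait helper on the commit path (example_wait_for_vblank is a stand-in, not a real function):

static void example_wait_for_vblank(struct intel_crtc *crtc);	/* hypothetical */

static void example_pre_plane_update(struct intel_crtc *crtc,
				     const struct intel_crtc_state *crtc_state,
				     const struct intel_plane_state *plane_state)
{
	/* Deactivate FBC first; the return value says whether WA #1198 applies. */
	if (intel_fbc_pre_update(crtc, crtc_state, plane_state))
		example_wait_for_vblank(crtc);

	/* ... go on to write the plane registers ... */
}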
/**
@ -897,14 +927,13 @@ static void __intel_fbc_disable(struct drm_i915_private *dev_priv)
struct intel_crtc *crtc = fbc->crtc;
WARN_ON(!mutex_is_locked(&fbc->lock));
WARN_ON(!fbc->enabled);
WARN_ON(!fbc->crtc);
WARN_ON(fbc->active);
DRM_DEBUG_KMS("Disabling FBC on pipe %c\n", pipe_name(crtc->pipe));
__intel_fbc_cleanup_cfb(dev_priv);
fbc->enabled = false;
fbc->crtc = NULL;
}
@ -915,11 +944,10 @@ static void __intel_fbc_post_update(struct intel_crtc *crtc)
WARN_ON(!mutex_is_locked(&fbc->lock));
if (!fbc->enabled || fbc->crtc != crtc)
if (fbc->crtc != crtc)
return;
fbc->flip_pending = false;
WARN_ON(fbc->active);
if (!i915_modparams.enable_fbc) {
intel_fbc_deactivate(dev_priv, "disabled at runtime per module param");
@ -933,10 +961,9 @@ static void __intel_fbc_post_update(struct intel_crtc *crtc)
if (!intel_fbc_can_activate(crtc))
return;
if (!fbc->busy_bits) {
intel_fbc_deactivate(dev_priv, "FBC enabled (active or scheduled)");
if (!fbc->busy_bits)
intel_fbc_hw_activate(dev_priv);
} else
else
intel_fbc_deactivate(dev_priv, "frontbuffer write");
}
@ -955,7 +982,7 @@ void intel_fbc_post_update(struct intel_crtc *crtc)
static unsigned int intel_fbc_get_frontbuffer_bit(struct intel_fbc *fbc)
{
if (fbc->enabled)
if (fbc->crtc)
return to_intel_plane(fbc->crtc->base.primary)->frontbuffer_bit;
else
return fbc->possible_framebuffer_bits;
@ -977,7 +1004,7 @@ void intel_fbc_invalidate(struct drm_i915_private *dev_priv,
fbc->busy_bits |= intel_fbc_get_frontbuffer_bit(fbc) & frontbuffer_bits;
if (fbc->enabled && fbc->busy_bits)
if (fbc->crtc && fbc->busy_bits)
intel_fbc_deactivate(dev_priv, "frontbuffer write");
mutex_unlock(&fbc->lock);
@ -998,7 +1025,7 @@ void intel_fbc_flush(struct drm_i915_private *dev_priv,
if (origin == ORIGIN_GTT || origin == ORIGIN_FLIP)
goto out;
if (!fbc->busy_bits && fbc->enabled &&
if (!fbc->busy_bits && fbc->crtc &&
(frontbuffer_bits & intel_fbc_get_frontbuffer_bit(fbc))) {
if (fbc->active)
intel_fbc_recompress(dev_priv);
@ -1047,12 +1074,12 @@ void intel_fbc_choose_crtc(struct drm_i915_private *dev_priv,
* to pipe or plane A. */
for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
struct intel_crtc_state *crtc_state;
struct intel_crtc *crtc = to_intel_crtc(plane_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(plane_state->hw.crtc);
if (!plane->has_fbc)
continue;
if (!plane_state->base.visible)
if (!plane_state->uapi.visible)
continue;
crtc_state = intel_atomic_get_new_crtc_state(state, crtc);
@ -1081,42 +1108,53 @@ out:
* intel_fbc_disable in the middle, as long as it is deactivated.
*/
void intel_fbc_enable(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
struct intel_fbc *fbc = &dev_priv->fbc;
struct intel_fbc_state_cache *cache = &fbc->state_cache;
const struct drm_framebuffer *fb = plane_state->hw.fb;
if (!fbc_supported(dev_priv))
return;
mutex_lock(&fbc->lock);
if (fbc->enabled) {
WARN_ON(fbc->crtc == NULL);
if (fbc->crtc == crtc) {
WARN_ON(!crtc_state->enable_fbc);
WARN_ON(fbc->active);
}
goto out;
if (fbc->crtc) {
if (fbc->crtc != crtc ||
!intel_fbc_cfb_size_changed(dev_priv))
goto out;
__intel_fbc_disable(dev_priv);
}
if (!crtc_state->enable_fbc)
goto out;
WARN_ON(fbc->active);
WARN_ON(fbc->crtc != NULL);
intel_fbc_update_state_cache(crtc, crtc_state, plane_state);
if (intel_fbc_alloc_cfb(crtc)) {
/* FIXME crtc_state->enable_fbc lies :( */
if (!cache->plane.visible)
goto out;
if (intel_fbc_alloc_cfb(dev_priv,
intel_fbc_calculate_cfb_size(dev_priv, cache),
fb->format->cpp[0])) {
cache->plane.visible = false;
fbc->no_fbc_reason = "not enough stolen memory";
goto out;
}
if ((IS_GEN9_BC(dev_priv) || IS_BROXTON(dev_priv)) &&
fb->modifier != I915_FORMAT_MOD_X_TILED)
cache->gen9_wa_cfb_stride =
DIV_ROUND_UP(cache->plane.src_w, 32 * fbc->threshold) * 8;
else
cache->gen9_wa_cfb_stride = 0;
DRM_DEBUG_KMS("Enabling FBC on pipe %c\n", pipe_name(crtc->pipe));
fbc->no_fbc_reason = "FBC enabled but not active yet\n";
fbc->enabled = true;
fbc->crtc = crtc;
out:
mutex_unlock(&fbc->lock);
@ -1156,7 +1194,7 @@ void intel_fbc_global_disable(struct drm_i915_private *dev_priv)
return;
mutex_lock(&fbc->lock);
if (fbc->enabled) {
if (fbc->crtc) {
WARN_ON(fbc->crtc->active);
__intel_fbc_disable(dev_priv);
}
@ -1172,7 +1210,7 @@ static void intel_fbc_underrun_work_fn(struct work_struct *work)
mutex_lock(&fbc->lock);
/* Maybe we were scheduled twice. */
if (fbc->underrun_detected || !fbc->enabled)
if (fbc->underrun_detected || !fbc->crtc)
goto out;
DRM_DEBUG_KMS("Disabling FBC due to FIFO underrun.\n");
@ -1244,28 +1282,6 @@ void intel_fbc_handle_fifo_underrun_irq(struct drm_i915_private *dev_priv)
schedule_work(&fbc->underrun_work);
}
/**
* intel_fbc_init_pipe_state - initialize FBC's CRTC visibility tracking
* @dev_priv: i915 device instance
*
* The FBC code needs to track CRTC visibility since the older platforms can't
* have FBC enabled while multiple pipes are used. This function does the
* initial setup at driver load to make sure FBC is matching the real hardware.
*/
void intel_fbc_init_pipe_state(struct drm_i915_private *dev_priv)
{
struct intel_crtc *crtc;
/* Don't even bother tracking anything if we don't need to. */
if (!no_fbc_on_multiple_pipes(dev_priv))
return;
for_each_intel_crtc(&dev_priv->drm, crtc)
if (intel_crtc_active(crtc) &&
crtc->base.primary->state->visible)
dev_priv->fbc.visible_pipes_mask |= (1 << crtc->pipe);
}
/*
* The DDX driver changes its behavior depending on the value it reads from
* i915.enable_fbc, so sanitize it by translating the default value into either
@ -1283,10 +1299,6 @@ static int intel_sanitize_fbc_option(struct drm_i915_private *dev_priv)
if (!HAS_FBC(dev_priv))
return 0;
/* https://bugs.freedesktop.org/show_bug.cgi?id=108085 */
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
return 0;
if (IS_BROADWELL(dev_priv) || INTEL_GEN(dev_priv) >= 9)
return 1;
@ -1317,7 +1329,6 @@ void intel_fbc_init(struct drm_i915_private *dev_priv)
INIT_WORK(&fbc->underrun_work, intel_fbc_underrun_work_fn);
mutex_init(&fbc->lock);
fbc->enabled = false;
fbc->active = false;
if (!drm_mm_initialized(&dev_priv->mm.stolen))


@ -19,15 +19,14 @@ struct intel_plane_state;
void intel_fbc_choose_crtc(struct drm_i915_private *dev_priv,
struct intel_atomic_state *state);
bool intel_fbc_is_active(struct drm_i915_private *dev_priv);
void intel_fbc_pre_update(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state);
bool intel_fbc_pre_update(struct intel_crtc *crtc,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void intel_fbc_post_update(struct intel_crtc *crtc);
void intel_fbc_init(struct drm_i915_private *dev_priv);
void intel_fbc_init_pipe_state(struct drm_i915_private *dev_priv);
void intel_fbc_enable(struct intel_crtc *crtc,
struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state);
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state);
void intel_fbc_disable(struct intel_crtc *crtc);
void intel_fbc_global_disable(struct drm_i915_private *dev_priv);
void intel_fbc_invalidate(struct drm_i915_private *dev_priv,


@ -229,11 +229,11 @@ static void frontbuffer_release(struct kref *ref)
vma->display_alignment = I915_GTT_MIN_ALIGNMENT;
spin_unlock(&obj->vma.lock);
obj->frontbuffer = NULL;
RCU_INIT_POINTER(obj->frontbuffer, NULL);
spin_unlock(&to_i915(obj->base.dev)->fb_tracking.lock);
i915_gem_object_put(obj);
kfree(front);
kfree_rcu(front, rcu);
}
struct intel_frontbuffer *
@ -242,11 +242,7 @@ intel_frontbuffer_get(struct drm_i915_gem_object *obj)
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct intel_frontbuffer *front;
spin_lock(&i915->fb_tracking.lock);
front = obj->frontbuffer;
if (front)
kref_get(&front->ref);
spin_unlock(&i915->fb_tracking.lock);
front = __intel_frontbuffer_get(obj);
if (front)
return front;
@ -262,13 +258,13 @@ intel_frontbuffer_get(struct drm_i915_gem_object *obj)
i915_active_may_sleep(frontbuffer_retire));
spin_lock(&i915->fb_tracking.lock);
if (obj->frontbuffer) {
if (rcu_access_pointer(obj->frontbuffer)) {
kfree(front);
front = obj->frontbuffer;
front = rcu_dereference_protected(obj->frontbuffer, true);
kref_get(&front->ref);
} else {
i915_gem_object_get(obj);
obj->frontbuffer = front;
rcu_assign_pointer(obj->frontbuffer, front);
}
spin_unlock(&i915->fb_tracking.lock);


@ -27,10 +27,10 @@
#include <linux/atomic.h>
#include <linux/kref.h>
#include "gem/i915_gem_object_types.h"
#include "i915_active.h"
struct drm_i915_private;
struct drm_i915_gem_object;
enum fb_op_origin {
ORIGIN_GTT,
@ -45,6 +45,7 @@ struct intel_frontbuffer {
atomic_t bits;
struct i915_active write;
struct drm_i915_gem_object *obj;
struct rcu_head rcu;
};
void intel_frontbuffer_flip_prepare(struct drm_i915_private *i915,
@ -54,6 +55,35 @@ void intel_frontbuffer_flip_complete(struct drm_i915_private *i915,
void intel_frontbuffer_flip(struct drm_i915_private *i915,
unsigned frontbuffer_bits);
void intel_frontbuffer_put(struct intel_frontbuffer *front);
static inline struct intel_frontbuffer *
__intel_frontbuffer_get(const struct drm_i915_gem_object *obj)
{
struct intel_frontbuffer *front;
if (likely(!rcu_access_pointer(obj->frontbuffer)))
return NULL;
rcu_read_lock();
do {
front = rcu_dereference(obj->frontbuffer);
if (!front)
break;
if (unlikely(!kref_get_unless_zero(&front->ref)))
continue;
if (likely(front == rcu_access_pointer(obj->frontbuffer)))
break;
intel_frontbuffer_put(front);
} while (1);
rcu_read_unlock();
return front;
}
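__intel_frontbuffer_get() above is the lookup half of the RCU conversion: it only takes a reference if kref_get_unless_zero() observes a live kref, and it retries when the pointer changed underneath it (the release path frees through kfree_rcu(), so the dereference inside the read-side critical section stays safe). A usage sketch pairing the lookup with a put:

/* Sketch of a caller; both functions appear in the diff above. */
static void example_use_frontbuffer(struct drm_i915_gem_object *obj)
{
	struct intel_frontbuffer *front;

	front = __intel_frontbuffer_get(obj);
	if (!front)
		return;	/* the object is not a frontbuffer right now */

	/* ... flush/track using front ... */

	intel_frontbuffer_put(front);
}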
struct intel_frontbuffer *
intel_frontbuffer_get(struct drm_i915_gem_object *obj);
@ -119,6 +149,4 @@ void intel_frontbuffer_track(struct intel_frontbuffer *old,
struct intel_frontbuffer *new,
unsigned int frontbuffer_bits);
void intel_frontbuffer_put(struct intel_frontbuffer *front);
#endif /* __INTEL_FRONTBUFFER_H__ */


@ -1870,7 +1870,7 @@ static bool is_hdcp2_supported(struct drm_i915_private *dev_priv)
return false;
return (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv) ||
IS_KABYLAKE(dev_priv));
IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv));
}
void intel_hdcp_component_init(struct drm_i915_private *dev_priv)


@ -285,7 +285,7 @@ static void ibx_write_infoframe(struct intel_encoder *encoder,
{
const u32 *data = frame;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
i915_reg_t reg = TVIDEO_DIP_CTL(intel_crtc->pipe);
u32 val = I915_READ(reg);
int i;
@ -321,7 +321,7 @@ static void ibx_read_infoframe(struct intel_encoder *encoder,
void *frame, ssize_t len)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
u32 val, *data = frame;
int i;
@ -340,7 +340,7 @@ static u32 ibx_infoframes_enabled(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum pipe pipe = to_intel_crtc(pipe_config->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(pipe_config->uapi.crtc)->pipe;
i915_reg_t reg = TVIDEO_DIP_CTL(pipe);
u32 val = I915_READ(reg);
@ -362,7 +362,7 @@ static void cpt_write_infoframe(struct intel_encoder *encoder,
{
const u32 *data = frame;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
i915_reg_t reg = TVIDEO_DIP_CTL(intel_crtc->pipe);
u32 val = I915_READ(reg);
int i;
@ -401,7 +401,7 @@ static void cpt_read_infoframe(struct intel_encoder *encoder,
void *frame, ssize_t len)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
u32 val, *data = frame;
int i;
@ -420,7 +420,7 @@ static u32 cpt_infoframes_enabled(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum pipe pipe = to_intel_crtc(pipe_config->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(pipe_config->uapi.crtc)->pipe;
u32 val = I915_READ(TVIDEO_DIP_CTL(pipe));
if ((val & VIDEO_DIP_ENABLE) == 0)
@ -438,7 +438,7 @@ static void vlv_write_infoframe(struct intel_encoder *encoder,
{
const u32 *data = frame;
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
i915_reg_t reg = VLV_TVIDEO_DIP_CTL(intel_crtc->pipe);
u32 val = I915_READ(reg);
int i;
@ -474,7 +474,7 @@ static void vlv_read_infoframe(struct intel_encoder *encoder,
void *frame, ssize_t len)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
u32 val, *data = frame;
int i;
@ -493,7 +493,7 @@ static u32 vlv_infoframes_enabled(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum pipe pipe = to_intel_crtc(pipe_config->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(pipe_config->uapi.crtc)->pipe;
u32 val = I915_READ(VLV_TVIDEO_DIP_CTL(pipe));
if ((val & VIDEO_DIP_ENABLE) == 0)
@ -708,7 +708,7 @@ intel_hdmi_compute_avi_infoframe(struct intel_encoder *encoder,
{
struct hdmi_avi_infoframe *frame = &crtc_state->infoframes.avi.avi;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
struct drm_connector *connector = conn_state->connector;
int ret;
@ -804,7 +804,7 @@ intel_hdmi_compute_hdmi_infoframe(struct intel_encoder *encoder,
ret = drm_hdmi_vendor_infoframe_from_display_mode(frame,
conn_state->connector,
&crtc_state->base.adjusted_mode);
&crtc_state->hw.adjusted_mode);
if (WARN_ON(ret))
return false;
@ -965,7 +965,7 @@ static bool intel_hdmi_set_gcp_infoframe(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
i915_reg_t reg;
if ((crtc_state->infoframes.enable &
@ -990,7 +990,7 @@ void intel_hdmi_read_gcp_infoframe(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
i915_reg_t reg;
if ((crtc_state->infoframes.enable &
@ -1027,7 +1027,7 @@ static void intel_hdmi_compute_gcp_infoframe(struct intel_encoder *encoder,
/* Enable default_phase whenever the display mode is suitably aligned */
if (gcp_default_phase_possible(crtc_state->pipe_bpp,
&crtc_state->base.adjusted_mode))
&crtc_state->hw.adjusted_mode))
crtc_state->infoframes.gcp |= GCP_DEFAULT_PHASE_ENABLE;
}
@ -1037,7 +1037,7 @@ static void ibx_set_infoframes(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_digital_port *intel_dig_port = enc_to_dig_port(&encoder->base);
struct intel_hdmi *intel_hdmi = &intel_dig_port->hdmi;
i915_reg_t reg = TVIDEO_DIP_CTL(intel_crtc->pipe);
@ -1096,7 +1096,7 @@ static void cpt_set_infoframes(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
i915_reg_t reg = TVIDEO_DIP_CTL(intel_crtc->pipe);
u32 val = I915_READ(reg);
@ -1145,7 +1145,7 @@ static void vlv_set_infoframes(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
i915_reg_t reg = VLV_TVIDEO_DIP_CTL(intel_crtc->pipe);
u32 val = I915_READ(reg);
@ -1736,9 +1736,9 @@ static void intel_hdmi_prepare(struct intel_encoder *encoder,
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
const struct drm_display_mode *adjusted_mode = &crtc_state->base.adjusted_mode;
const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;
u32 hdmi_val;
intel_dp_dual_mode_set_tmds_output(intel_hdmi, true);
@ -1829,7 +1829,7 @@ static void intel_hdmi_get_config(struct intel_encoder *encoder,
tmp & HDMI_COLOR_RANGE_16_235)
pipe_config->limited_color_range = true;
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
if ((tmp & SDVO_COLOR_FORMAT_MASK) == HDMI_COLOR_FORMAT_12bpc)
dotclock = pipe_config->port_clock * 2 / 3;
@ -1839,7 +1839,7 @@ static void intel_hdmi_get_config(struct intel_encoder *encoder,
if (pipe_config->pixel_multiplier)
dotclock /= pipe_config->pixel_multiplier;
pipe_config->base.adjusted_mode.crtc_clock = dotclock;
pipe_config->hw.adjusted_mode.crtc_clock = dotclock;
pipe_config->lane_count = 4;
@ -1860,7 +1860,7 @@ static void intel_enable_hdmi_audio(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config,
const struct drm_connector_state *conn_state)
{
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
WARN_ON(!pipe_config->has_hdmi_sink);
DRM_DEBUG_DRIVER("Enabling HDMI audio on pipe %c\n",
@ -1946,7 +1946,7 @@ static void cpt_enable_hdmi(struct intel_encoder *encoder,
{
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
enum pipe pipe = crtc->pipe;
u32 temp;
@ -2010,7 +2010,7 @@ static void intel_disable_hdmi(struct intel_encoder *encoder,
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
struct intel_digital_port *intel_dig_port =
hdmi_to_dig_port(intel_hdmi);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
u32 temp;
temp = I915_READ(intel_hdmi->hdmi_reg);
@ -2210,12 +2210,12 @@ static bool hdmi_deep_color_possible(const struct intel_crtc_state *crtc_state,
int bpc)
{
struct drm_i915_private *dev_priv =
to_i915(crtc_state->base.crtc->dev);
struct drm_atomic_state *state = crtc_state->base.state;
to_i915(crtc_state->uapi.crtc->dev);
struct drm_atomic_state *state = crtc_state->uapi.state;
struct drm_connector_state *connector_state;
struct drm_connector *connector;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
int i;
if (HAS_GMCH(dev_priv))
@ -2240,7 +2240,7 @@ static bool hdmi_deep_color_possible(const struct intel_crtc_state *crtc_state,
for_each_new_connector_in_state(state, connector, connector_state, i) {
const struct drm_display_info *info = &connector->display_info;
if (connector_state->crtc != crtc_state->base.crtc)
if (connector_state->crtc != crtc_state->uapi.crtc)
continue;
if (crtc_state->output_format == INTEL_OUTPUT_FORMAT_YCBCR420) {
@ -2281,7 +2281,7 @@ static bool
intel_hdmi_ycbcr420_config(struct drm_connector *connector,
struct intel_crtc_state *config)
{
struct intel_crtc *intel_crtc = to_intel_crtc(config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(config->uapi.crtc);
if (!connector->ycbcr_420_allowed) {
DRM_ERROR("Platform doesn't support YCBCR420 output\n");
@ -2336,7 +2336,7 @@ static int intel_hdmi_compute_clock(struct intel_encoder *encoder,
{
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
int bpc, clock = adjusted_mode->crtc_clock;
if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK)
@ -2378,7 +2378,7 @@ static bool intel_hdmi_limited_color_range(const struct intel_crtc_state *crtc_s
const struct intel_digital_connector_state *intel_conn_state =
to_intel_digital_connector_state(conn_state);
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
/*
* Our YCbCr output is always limited range.
@ -2406,7 +2406,7 @@ int intel_hdmi_compute_config(struct intel_encoder *encoder,
{
struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct drm_connector *connector = conn_state->connector;
struct drm_scdc *scdc = &connector->display_info.hdmi.scdc;
struct intel_digital_connector_state *intel_conn_state =
@ -2451,8 +2451,9 @@ int intel_hdmi_compute_config(struct intel_encoder *encoder,
if (ret)
return ret;
/* Set user selected PAR to incoming mode's member */
adjusted_mode->picture_aspect_ratio = conn_state->picture_aspect_ratio;
if (conn_state->picture_aspect_ratio)
adjusted_mode->picture_aspect_ratio =
conn_state->picture_aspect_ratio;
pipe_config->lane_count = 4;
@ -2872,7 +2873,6 @@ intel_hdmi_add_properties(struct intel_hdmi *intel_hdmi, struct drm_connector *c
intel_attach_colorspace_property(connector);
drm_connector_attach_content_type_property(connector);
connector->state->picture_aspect_ratio = HDMI_PICTURE_ASPECT_NONE;
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
drm_object_attach_property(&connector->base,
@ -3131,20 +3131,29 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
struct intel_encoder *intel_encoder = &intel_dig_port->base;
struct drm_device *dev = intel_encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct i2c_adapter *ddc;
enum port port = intel_encoder->port;
struct cec_connector_info conn_info;
DRM_DEBUG_KMS("Adding HDMI connector on [ENCODER:%d:%s]\n",
intel_encoder->base.base.id, intel_encoder->base.name);
if (INTEL_GEN(dev_priv) < 12 && WARN_ON(port == PORT_A))
return;
if (WARN(intel_dig_port->max_lanes < 4,
"Not enough lanes (%d) for HDMI on [ENCODER:%d:%s]\n",
intel_dig_port->max_lanes, intel_encoder->base.base.id,
intel_encoder->base.name))
return;
drm_connector_init(dev, connector, &intel_hdmi_connector_funcs,
DRM_MODE_CONNECTOR_HDMIA);
intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port);
ddc = intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus);
drm_connector_init_with_ddc(dev, connector,
&intel_hdmi_connector_funcs,
DRM_MODE_CONNECTOR_HDMIA,
ddc);
drm_connector_helper_add(connector, &intel_hdmi_connector_helper_funcs);
connector->interlace_allowed = 1;
@ -3154,10 +3163,6 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
connector->ycbcr_420_allowed = true;
intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port);
if (WARN_ON(port == PORT_A))
return;
intel_encoder->hpd_pin = intel_hpd_pin_default(dev_priv, port);
if (HAS_DDI(dev_priv))
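The connector-init hunks above reorder things so the DDC pin is resolved before the connector exists, letting drm_connector_init_with_ddc() register the gmbus adapter with the connector; that is what makes the new "ddc" symlink in the connector's sysfs directory point at the right i2c bus. The core of the sequence, condensed from the hunk into a sketch:

static void example_init_hdmi_connector(struct drm_i915_private *dev_priv,
					struct drm_device *dev,
					struct drm_connector *connector,
					struct intel_hdmi *intel_hdmi,
					enum port port)
{
	struct i2c_adapter *ddc;

	/* Resolve the DDC pin first so the adapter can be registered below. */
	intel_hdmi->ddc_bus = intel_hdmi_ddc_pin(dev_priv, port);
	ddc = intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus);

	/* Registers ddc with the connector, creating the sysfs "ddc" link. */
	drm_connector_init_with_ddc(dev, connector,
				    &intel_hdmi_connector_funcs,
				    DRM_MODE_CONNECTOR_HDMIA,
				    ddc);
}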


@ -189,7 +189,7 @@ void lspcon_ycbcr420_config(struct drm_connector *connector,
{
const struct drm_display_info *info = &connector->display_info;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
if (drm_mode_is_420_only(info, adjusted_mode) &&
connector->ycbcr_420_allowed) {
@ -475,7 +475,7 @@ void lspcon_set_infoframes(struct intel_encoder *encoder,
struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
struct intel_lspcon *lspcon = &dig_port->lspcon;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
if (!lspcon->active) {
DRM_ERROR("Writing infoframes while LSPCON disabled ?\n");


@ -135,7 +135,7 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
else
flags |= DRM_MODE_FLAG_PVSYNC;
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
if (INTEL_GEN(dev_priv) < 5)
pipe_config->gmch_pfit.lvds_border_bits =
@ -148,7 +148,7 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
pipe_config->gmch_pfit.control |= tmp & PANEL_8TO6_DITHER_ENABLE;
}
pipe_config->base.adjusted_mode.crtc_clock = pipe_config->port_clock;
pipe_config->hw.adjusted_mode.crtc_clock = pipe_config->port_clock;
}
static void intel_lvds_pps_get_hw_state(struct drm_i915_private *dev_priv,
@ -230,8 +230,8 @@ static void intel_pre_enable_lvds(struct intel_encoder *encoder,
{
struct intel_lvds_encoder *lvds_encoder = to_lvds_encoder(&encoder->base);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
enum pipe pipe = crtc->pipe;
u32 temp;
@ -392,8 +392,8 @@ static int intel_lvds_compute_config(struct intel_encoder *intel_encoder,
to_lvds_encoder(&intel_encoder->base);
struct intel_connector *intel_connector =
lvds_encoder->attached_connector;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
unsigned int lvds_bpp;
/* Should never happen!! */


@ -941,6 +941,13 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
if (mboxes & MBOX_ACPI) {
DRM_DEBUG_DRIVER("Public ACPI methods supported\n");
opregion->acpi = base + OPREGION_ACPI_OFFSET;
/*
* Indicate we handle monitor hotplug events ourselves so we do
* not need ACPI notifications for them. Disabling these avoids
* triggering the AML code doing the notification, which may be
* broken as Windows also seems to disable these.
*/
opregion->acpi->chpd = 1;
}
if (mboxes & MBOX_SWSCI) {


@ -279,12 +279,21 @@ static void intel_overlay_flip_prepare(struct intel_overlay *overlay,
struct i915_vma *vma)
{
enum pipe pipe = overlay->crtc->pipe;
struct intel_frontbuffer *from = NULL, *to = NULL;
WARN_ON(overlay->old_vma);
intel_frontbuffer_track(overlay->vma ? overlay->vma->obj->frontbuffer : NULL,
vma ? vma->obj->frontbuffer : NULL,
INTEL_FRONTBUFFER_OVERLAY(pipe));
if (overlay->vma)
from = intel_frontbuffer_get(overlay->vma->obj);
if (vma)
to = intel_frontbuffer_get(vma->obj);
intel_frontbuffer_track(from, to, INTEL_FRONTBUFFER_OVERLAY(pipe));
if (to)
intel_frontbuffer_put(to);
if (from)
intel_frontbuffer_put(from);
intel_frontbuffer_flip_prepare(overlay->i915,
INTEL_FRONTBUFFER_OVERLAY(pipe));
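With obj->frontbuffer now an RCU pointer, the overlay can no longer hand raw obj->frontbuffer pointers to intel_frontbuffer_track(); the hunk above takes proper references around the call and drops them afterwards. The same pattern, distilled (the old_vma/new_vma names are illustrative):

static void example_track_flip(struct i915_vma *old_vma,
			       struct i915_vma *new_vma,
			       unsigned int frontbuffer_bit)
{
	struct intel_frontbuffer *from = NULL, *to = NULL;

	if (old_vma)
		from = intel_frontbuffer_get(old_vma->obj);
	if (new_vma)
		to = intel_frontbuffer_get(new_vma->obj);

	/* track() only needs the references to stay live across the call. */
	intel_frontbuffer_track(from, to, frontbuffer_bit);

	if (to)
		intel_frontbuffer_put(to);
	if (from)
		intel_frontbuffer_put(from);
}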
@ -668,8 +677,8 @@ static void update_colorkey(struct intel_overlay *overlay,
if (overlay->color_key_enabled)
flags |= DST_KEY_ENABLE;
if (state->base.visible)
format = state->base.fb->format->format;
if (state->uapi.visible)
format = state->hw.fb->format->format;
switch (format) {
case DRM_FORMAT_C8:
@ -758,15 +767,13 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
atomic_inc(&dev_priv->gpu_error.pending_fb_pin);
i915_gem_object_lock(new_bo);
vma = i915_gem_object_pin_to_display_plane(new_bo,
0, NULL, PIN_MAPPABLE);
i915_gem_object_unlock(new_bo);
if (IS_ERR(vma)) {
ret = PTR_ERR(vma);
goto out_pin_section;
}
intel_frontbuffer_flush(new_bo->frontbuffer, ORIGIN_DIRTYFB);
i915_gem_object_flush_frontbuffer(new_bo, ORIGIN_DIRTYFB);
if (!overlay->active) {
u32 oconfig;
@ -1326,12 +1333,14 @@ err_put_bo:
void intel_overlay_setup(struct drm_i915_private *dev_priv)
{
struct intel_overlay *overlay;
struct intel_engine_cs *engine;
int ret;
if (!HAS_OVERLAY(dev_priv))
return;
if (!HAS_ENGINE(dev_priv, RCS0))
engine = dev_priv->engine[RCS0];
if (!engine || !engine->kernel_context)
return;
overlay = kzalloc(sizeof(*overlay), GFP_KERNEL);
@ -1339,7 +1348,7 @@ void intel_overlay_setup(struct drm_i915_private *dev_priv)
return;
overlay->i915 = dev_priv;
overlay->context = dev_priv->engine[RCS0]->kernel_context;
overlay->context = engine->kernel_context;
GEM_BUG_ON(!overlay->context);
overlay->color_key = 0x0101fe;


@ -178,7 +178,7 @@ intel_pch_panel_fitting(struct intel_crtc *intel_crtc,
struct intel_crtc_state *pipe_config,
int fitting_mode)
{
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
int x = 0, y = 0, width = 0, height = 0;
/* Native modes don't need fitting */
@ -300,7 +300,7 @@ static inline u32 panel_fitter_scaling(u32 source, u32 target)
static void i965_scale_aspect(struct intel_crtc_state *pipe_config,
u32 *pfit_control)
{
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
u32 scaled_width = adjusted_mode->crtc_hdisplay *
pipe_config->pipe_src_h;
u32 scaled_height = pipe_config->pipe_src_w *
@ -321,7 +321,7 @@ static void i9xx_scale_aspect(struct intel_crtc_state *pipe_config,
u32 *pfit_control, u32 *pfit_pgm_ratios,
u32 *border)
{
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
u32 scaled_width = adjusted_mode->crtc_hdisplay *
pipe_config->pipe_src_h;
u32 scaled_height = pipe_config->pipe_src_w *
@ -380,7 +380,7 @@ void intel_gmch_panel_fitting(struct intel_crtc *intel_crtc,
{
struct drm_i915_private *dev_priv = to_i915(intel_crtc->base.dev);
u32 pfit_control = 0, pfit_pgm_ratios = 0, border = 0;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
/* Native modes don't need fitting */
if (adjusted_mode->crtc_hdisplay == pipe_config->pipe_src_w &&
@ -1047,7 +1047,7 @@ static void vlv_enable_backlight(const struct intel_crtc_state *crtc_state,
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct intel_panel *panel = &connector->panel;
enum pipe pipe = to_intel_crtc(crtc_state->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
u32 ctl, ctl2;
ctl2 = I915_READ(VLV_BLC_PWM_CTL2(pipe));
@ -1077,7 +1077,7 @@ static void bxt_enable_backlight(const struct intel_crtc_state *crtc_state,
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct intel_panel *panel = &connector->panel;
enum pipe pipe = to_intel_crtc(crtc_state->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
u32 pwm_ctl, val;
/* Controller 1 uses the utility pin. */
@ -1189,7 +1189,7 @@ void intel_panel_enable_backlight(const struct intel_crtc_state *crtc_state,
struct intel_connector *connector = to_intel_connector(conn_state->connector);
struct drm_i915_private *dev_priv = to_i915(connector->base.dev);
struct intel_panel *panel = &connector->panel;
enum pipe pipe = to_intel_crtc(crtc_state->base.crtc)->pipe;
enum pipe pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
if (!panel->backlight.present)
return;
@ -1840,13 +1840,22 @@ static int pwm_setup_backlight(struct intel_connector *connector,
enum pipe pipe)
{
struct drm_device *dev = connector->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_panel *panel = &connector->panel;
const char *desc;
int retval;
/* Get the PWM chip for backlight control */
panel->backlight.pwm = pwm_get(dev->dev, "pwm_backlight");
/* Get the right PWM chip for DSI backlight according to VBT */
if (dev_priv->vbt.dsi.config->pwm_blc == PPS_BLC_PMIC) {
panel->backlight.pwm = pwm_get(dev->dev, "pwm_pmic_backlight");
desc = "PMIC";
} else {
panel->backlight.pwm = pwm_get(dev->dev, "pwm_soc_backlight");
desc = "SoC";
}
if (IS_ERR(panel->backlight.pwm)) {
DRM_ERROR("Failed to own the pwm chip\n");
DRM_ERROR("Failed to get the %s PWM chip\n", desc);
panel->backlight.pwm = NULL;
return -ENODEV;
}
@ -1873,6 +1882,7 @@ static int pwm_setup_backlight(struct intel_connector *connector,
CRC_PMIC_PWM_PERIOD_NS);
panel->backlight.enabled = panel->backlight.level != 0;
DRM_INFO("Using %s PWM for LCD backlight control\n", desc);
return 0;
}
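The backlight fix above stops asking for a generic "pwm_backlight" and instead consults the VBT to decide whether the panel is wired to the PMIC PWM or the SoC (LPSS) PWM, requesting the matching lookup name. A minimal sketch of the selection (the example_* helper is illustrative; the con_id strings are the ones from the hunk):

#include <linux/pwm.h>

static struct pwm_device *example_get_backlight_pwm(struct device *dev,
						    bool vbt_says_pmic)
{
	/* Lookup names registered by the ACPI/MFD side of this series. */
	const char *con_id = vbt_says_pmic ? "pwm_pmic_backlight"
					   : "pwm_soc_backlight";

	return pwm_get(dev, con_id);
}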


@ -309,13 +309,13 @@ retry:
goto put_state;
}
pipe_config->base.mode_changed = pipe_config->has_psr;
pipe_config->uapi.mode_changed = pipe_config->has_psr;
pipe_config->crc_enabled = enable;
if (IS_HASWELL(dev_priv) &&
pipe_config->base.active && crtc->pipe == PIPE_A &&
pipe_config->hw.active && crtc->pipe == PIPE_A &&
pipe_config->cpu_transcoder == TRANSCODER_EDP)
pipe_config->base.mode_changed = true;
pipe_config->uapi.mode_changed = true;
ret = drm_atomic_commit(state);


@ -26,6 +26,7 @@
#include "display/intel_dp.h"
#include "i915_drv.h"
#include "intel_atomic.h"
#include "intel_display_types.h"
#include "intel_psr.h"
#include "intel_sprite.h"
@ -401,7 +402,9 @@ static void intel_psr_enable_sink(struct intel_dp *intel_dp)
/* Enable ALPM at sink for psr2 */
if (dev_priv->psr.psr2_enabled) {
drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG,
DP_ALPM_ENABLE);
DP_ALPM_ENABLE |
DP_ALPM_LOCK_ERROR_IRQ_HPD_ENABLE);
dpcd_val |= DP_PSR_ENABLE_PSR2 | DP_PSR_IRQ_HPD_WITH_CRC_ERRORS;
} else {
if (dev_priv->psr.link_standby)
@ -536,11 +539,11 @@ transcoder_has_psr2(struct drm_i915_private *dev_priv, enum transcoder trans)
static u32 intel_get_frame_time_us(const struct intel_crtc_state *cstate)
{
if (!cstate || !cstate->base.active)
if (!cstate || !cstate->hw.active)
return 0;
return DIV_ROUND_UP(1000 * 1000,
drm_mode_vrefresh(&cstate->base.adjusted_mode));
drm_mode_vrefresh(&cstate->hw.adjusted_mode));
}
static void psr2_program_idle_frames(struct drm_i915_private *dev_priv,
@ -605,9 +608,9 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
int crtc_hdisplay = crtc_state->base.adjusted_mode.crtc_hdisplay;
int crtc_vdisplay = crtc_state->base.adjusted_mode.crtc_vdisplay;
int psr_max_h = 0, psr_max_v = 0;
int crtc_hdisplay = crtc_state->hw.adjusted_mode.crtc_hdisplay;
int crtc_vdisplay = crtc_state->hw.adjusted_mode.crtc_vdisplay;
int psr_max_h = 0, psr_max_v = 0, max_bpp = 0;
if (!dev_priv->psr.sink_psr2_support)
return false;
@@ -631,12 +634,15 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
if (INTEL_GEN(dev_priv) >= 12) {
psr_max_h = 5120;
psr_max_v = 3200;
max_bpp = 30;
} else if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
psr_max_h = 4096;
psr_max_v = 2304;
max_bpp = 24;
} else if (IS_GEN(dev_priv, 9)) {
psr_max_h = 3640;
psr_max_v = 2304;
max_bpp = 24;
}
if (crtc_hdisplay > psr_max_h || crtc_vdisplay > psr_max_v) {
@@ -646,6 +652,12 @@ static bool intel_psr2_config_valid(struct intel_dp *intel_dp,
return false;
}
if (crtc_state->pipe_bpp > max_bpp) {
DRM_DEBUG_KMS("PSR2 not enabled, pipe bpp %d > max supported %d\n",
crtc_state->pipe_bpp, max_bpp);
return false;
}
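/*
 * Note: pipe_bpp counts bits across all three color channels, so the
 * caps above amount to 10 bpc on gen12 and 8 bpc on gen9-11; a 30 bpp
 * (10 bpc) pipe passes the check on TGL but blocks PSR2 on ICL.
 */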
/*
* HW sends SU blocks of size four scan lines, which means the starting
* X coordinate and Y granularity requirements will always be met. We
@@ -672,7 +684,7 @@ void intel_psr_compute_config(struct intel_dp *intel_dp,
struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
int psr_setup_time;
if (!CAN_PSR(dev_priv))
@@ -792,7 +804,7 @@ static void intel_psr_enable_locked(struct drm_i915_private *dev_priv,
dev_priv->psr.psr2_enabled = intel_psr2_enabled(dev_priv, crtc_state);
dev_priv->psr.busy_frontbuffer_bits = 0;
dev_priv->psr.pipe = to_intel_crtc(crtc_state->base.crtc)->pipe;
dev_priv->psr.pipe = to_intel_crtc(crtc_state->uapi.crtc)->pipe;
dev_priv->psr.dc3co_enabled = !!crtc_state->dc3co_exitline;
dev_priv->psr.dc3co_exit_delay = intel_get_frame_time_us(crtc_state);
dev_priv->psr.transcoder = crtc_state->cpu_transcoder;
@@ -924,6 +936,9 @@ static void intel_psr_disable_locked(struct intel_dp *intel_dp)
/* Disable PSR on Sink */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_EN_CFG, 0);
if (dev_priv->psr.psr2_enabled)
drm_dp_dpcd_writeb(&intel_dp->aux, DP_RECEIVER_ALPM_CONFIG, 0);
dev_priv->psr.enabled = false;
}
@@ -1039,7 +1054,7 @@ unlock:
int intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state,
u32 *out_value)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
if (!dev_priv->psr.enabled || !new_crtc_state->has_psr)
@@ -1096,7 +1111,7 @@ static int intel_psr_fastset_force(struct drm_i915_private *dev_priv)
struct drm_device *dev = &dev_priv->drm;
struct drm_modeset_acquire_ctx ctx;
struct drm_atomic_state *state;
struct drm_crtc *crtc;
struct intel_crtc *crtc;
int err;
state = drm_atomic_state_alloc(dev);
@@ -1107,21 +1122,18 @@ static int intel_psr_fastset_force(struct drm_i915_private *dev_priv)
state->acquire_ctx = &ctx;
retry:
drm_for_each_crtc(crtc, dev) {
struct drm_crtc_state *crtc_state;
struct intel_crtc_state *intel_crtc_state;
for_each_intel_crtc(dev, crtc) {
struct intel_crtc_state *crtc_state =
intel_atomic_get_crtc_state(state, crtc);
crtc_state = drm_atomic_get_crtc_state(state, crtc);
if (IS_ERR(crtc_state)) {
err = PTR_ERR(crtc_state);
goto error;
}
intel_crtc_state = to_intel_crtc_state(crtc_state);
if (crtc_state->active && intel_crtc_state->has_psr) {
if (crtc_state->hw.active && crtc_state->has_psr) {
/* Mark mode as changed to trigger a pipe->update() */
crtc_state->mode_changed = true;
crtc_state->uapi.mode_changed = true;
break;
}
}
@@ -1379,11 +1391,80 @@ void intel_psr_init(struct drm_i915_private *dev_priv)
mutex_init(&dev_priv->psr.lock);
}
void intel_psr_short_pulse(struct intel_dp *intel_dp)
static int psr_get_status_and_error_status(struct intel_dp *intel_dp,
u8 *status, u8 *error_status)
{
struct drm_dp_aux *aux = &intel_dp->aux;
int ret;
ret = drm_dp_dpcd_readb(aux, DP_PSR_STATUS, status);
if (ret != 1)
return ret;
ret = drm_dp_dpcd_readb(aux, DP_PSR_ERROR_STATUS, error_status);
if (ret != 1)
return ret;
*status = *status & DP_PSR_SINK_STATE_MASK;
return 0;
}
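/*
 * drm_dp_dpcd_readb() returns the number of bytes transferred (1 on
 * success) or a negative error code, so the ret != 1 tests above catch
 * both short and failed reads.
 */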
static void psr_alpm_check(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct drm_dp_aux *aux = &intel_dp->aux;
struct i915_psr *psr = &dev_priv->psr;
u8 val;
int r;
if (!psr->psr2_enabled)
return;
r = drm_dp_dpcd_readb(aux, DP_RECEIVER_ALPM_STATUS, &val);
if (r != 1) {
DRM_ERROR("Error reading ALPM status\n");
return;
}
if (val & DP_ALPM_LOCK_TIMEOUT_ERROR) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
DRM_DEBUG_KMS("ALPM lock timeout error, disabling PSR\n");
/* Clearing error */
drm_dp_dpcd_writeb(aux, DP_RECEIVER_ALPM_STATUS, val);
}
}
static void psr_capability_changed_check(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct i915_psr *psr = &dev_priv->psr;
u8 val;
int r;
r = drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_ESI, &val);
if (r != 1) {
DRM_ERROR("Error reading DP_PSR_ESI\n");
return;
}
if (val & DP_PSR_CAPS_CHANGE) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
DRM_DEBUG_KMS("Sink PSR capability changed, disabling PSR\n");
/* Clearing it */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ESI, val);
}
}
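/*
 * Both sink checks follow the same recover-and-clear shape: disable
 * PSR, flag the sink as unreliable so PSR stays off, then write the
 * value just read back to the register. This assumes the DPCD status
 * bits involved are write-1-to-clear, which is worth confirming
 * against the DisplayPort spec for each register.
 */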
void intel_psr_short_pulse(struct intel_dp *intel_dp)
{
struct drm_i915_private *dev_priv = dp_to_i915(intel_dp);
struct i915_psr *psr = &dev_priv->psr;
u8 status, error_status;
const u8 errors = DP_PSR_RFB_STORAGE_ERROR |
DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR |
DP_PSR_LINK_CRC_ERROR;
@@ -1396,38 +1477,34 @@ void intel_psr_short_pulse(struct intel_dp *intel_dp)
if (!psr->enabled || psr->dp != intel_dp)
goto exit;
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_STATUS, &val) != 1) {
DRM_ERROR("PSR_STATUS dpcd read failed\n");
if (psr_get_status_and_error_status(intel_dp, &status, &error_status)) {
DRM_ERROR("Error reading PSR status or error status\n");
goto exit;
}
if ((val & DP_PSR_SINK_STATE_MASK) == DP_PSR_SINK_INTERNAL_ERROR) {
DRM_DEBUG_KMS("PSR sink internal error, disabling PSR\n");
if (status == DP_PSR_SINK_INTERNAL_ERROR || (error_status & errors)) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
}
if (drm_dp_dpcd_readb(&intel_dp->aux, DP_PSR_ERROR_STATUS, &val) != 1) {
DRM_ERROR("PSR_ERROR_STATUS dpcd read failed\n");
goto exit;
}
if (val & DP_PSR_RFB_STORAGE_ERROR)
if (status == DP_PSR_SINK_INTERNAL_ERROR && !error_status)
DRM_DEBUG_KMS("PSR sink internal error, disabling PSR\n");
if (error_status & DP_PSR_RFB_STORAGE_ERROR)
DRM_DEBUG_KMS("PSR RFB storage error, disabling PSR\n");
if (val & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
if (error_status & DP_PSR_VSC_SDP_UNCORRECTABLE_ERROR)
DRM_DEBUG_KMS("PSR VSC SDP uncorrectable error, disabling PSR\n");
if (val & DP_PSR_LINK_CRC_ERROR)
if (error_status & DP_PSR_LINK_CRC_ERROR)
DRM_DEBUG_KMS("PSR Link CRC error, disabling PSR\n");
if (val & ~errors)
if (error_status & ~errors)
DRM_ERROR("PSR_ERROR_STATUS unhandled errors %x\n",
val & ~errors);
if (val & errors) {
intel_psr_disable_locked(intel_dp);
psr->sink_not_reliable = true;
}
error_status & ~errors);
/* clear status register */
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ERROR_STATUS, val);
drm_dp_dpcd_writeb(&intel_dp->aux, DP_PSR_ERROR_STATUS, error_status);
psr_alpm_check(intel_dp);
psr_capability_changed_check(intel_dp);
exit:
mutex_unlock(&psr->lock);
}
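Net effect of the rework above: DP_PSR_STATUS and DP_PSR_ERROR_STATUS are each read once via psr_get_status_and_error_status(), PSR is disabled at most once for any combination of sink internal error and known error bits, and the error register is cleared with exactly the value that was observed.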


@@ -1087,7 +1087,7 @@ static bool intel_sdvo_compute_avi_infoframe(struct intel_sdvo *intel_sdvo,
{
struct hdmi_avi_infoframe *frame = &crtc_state->infoframes.avi.avi;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
int ret;
if (!crtc_state->has_hdmi_sink)
@@ -1276,8 +1276,8 @@ static int intel_sdvo_compute_config(struct intel_encoder *encoder,
to_intel_sdvo_connector_state(conn_state);
struct intel_sdvo_connector *intel_sdvo_connector =
to_intel_sdvo_connector(conn_state->connector);
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *mode = &pipe_config->base.mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
struct drm_display_mode *mode = &pipe_config->hw.mode;
DRM_DEBUG_KMS("forcing bpc to 8 for SDVO\n");
pipe_config->pipe_bpp = 8*3;
@@ -1349,9 +1349,9 @@ static int intel_sdvo_compute_config(struct intel_encoder *encoder,
if (IS_TV(intel_sdvo_connector))
i9xx_adjust_sdvo_tv_clock(pipe_config);
/* Set user selected PAR to incoming mode's member */
if (intel_sdvo_connector->is_hdmi)
adjusted_mode->picture_aspect_ratio = conn_state->picture_aspect_ratio;
if (conn_state->picture_aspect_ratio)
adjusted_mode->picture_aspect_ratio =
conn_state->picture_aspect_ratio;
if (!intel_sdvo_compute_avi_infoframe(intel_sdvo,
pipe_config, conn_state)) {
@@ -1429,13 +1429,13 @@ static void intel_sdvo_pre_enable(struct intel_encoder *intel_encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(intel_encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
const struct drm_display_mode *adjusted_mode = &crtc_state->base.adjusted_mode;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_display_mode *adjusted_mode = &crtc_state->hw.adjusted_mode;
const struct intel_sdvo_connector_state *sdvo_state =
to_intel_sdvo_connector_state(conn_state);
const struct intel_sdvo_connector *intel_sdvo_connector =
to_intel_sdvo_connector(conn_state->connector);
const struct drm_display_mode *mode = &crtc_state->base.mode;
const struct drm_display_mode *mode = &crtc_state->hw.mode;
struct intel_sdvo *intel_sdvo = to_sdvo(intel_encoder);
u32 sdvox;
struct intel_sdvo_in_out_map in_out;
@@ -1629,7 +1629,7 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
flags |= DRM_MODE_FLAG_NVSYNC;
}
pipe_config->base.adjusted_mode.flags |= flags;
pipe_config->hw.adjusted_mode.flags |= flags;
/*
* pixel multiplier readout is tricky: Only on i915g/gm it is stored in
@@ -1649,7 +1649,7 @@ static void intel_sdvo_get_config(struct intel_encoder *encoder,
if (pipe_config->pixel_multiplier)
dotclock /= pipe_config->pixel_multiplier;
pipe_config->base.adjusted_mode.crtc_clock = dotclock;
pipe_config->hw.adjusted_mode.crtc_clock = dotclock;
/* Cross check the port pixel multiplier with the sdvo encoder state. */
if (intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_CLOCK_RATE_MULT,
@@ -1701,7 +1701,7 @@ static void intel_sdvo_enable_audio(struct intel_sdvo *intel_sdvo,
const struct drm_connector_state *conn_state)
{
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
struct drm_connector *connector = conn_state->connector;
u8 *eld = connector->eld;
@@ -1723,7 +1723,7 @@ static void intel_disable_sdvo(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_sdvo *intel_sdvo = to_sdvo(encoder);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
u32 temp;
if (old_crtc_state->has_audio)
@@ -1785,7 +1785,7 @@ static void intel_enable_sdvo(struct intel_encoder *encoder,
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_sdvo *intel_sdvo = to_sdvo(encoder);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
u32 temp;
bool input1, input2;
int i;
@@ -2654,7 +2654,6 @@ intel_sdvo_add_hdmi_properties(struct intel_sdvo *intel_sdvo,
intel_attach_broadcast_rgb_property(&connector->base.base);
}
intel_attach_aspect_ratio_property(&connector->base.base);
connector->base.base.state->picture_aspect_ratio = HDMI_PICTURE_ASPECT_NONE;
}
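/*
 * With the explicit HDMI_PICTURE_ASPECT_NONE assignment gone, the
 * default comes from intel_attach_aspect_ratio_property() itself, and
 * intel_sdvo_compute_config() above forwards an aspect ratio only when
 * conn_state->picture_aspect_ratio is non-zero, i.e. when userspace
 * actually selected one.
 */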
static struct intel_sdvo_connector *intel_sdvo_connector_alloc(void)


@@ -81,9 +81,9 @@ int intel_usecs_to_scanlines(const struct drm_display_mode *adjusted_mode,
*/
void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
const struct drm_display_mode *adjusted_mode = &new_crtc_state->base.adjusted_mode;
const struct drm_display_mode *adjusted_mode = &new_crtc_state->hw.adjusted_mode;
long timeout = msecs_to_jiffies_timeout(1);
int scanline, min, max, vblank_start;
wait_queue_head_t *wq = drm_crtc_vblank_waitqueue(&crtc->base);
@@ -120,7 +120,7 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
crtc->debug.min_vbl = min;
crtc->debug.max_vbl = max;
trace_i915_pipe_update_start(crtc);
trace_intel_pipe_update_start(crtc);
for (;;) {
/*
@@ -173,7 +173,7 @@ void intel_pipe_update_start(const struct intel_crtc_state *new_crtc_state)
crtc->debug.start_vbl_time = ktime_get();
crtc->debug.start_vbl_count = intel_crtc_get_vblank_counter(crtc);
trace_i915_pipe_update_vblank_evaded(crtc);
trace_intel_pipe_update_vblank_evaded(crtc);
return;
irq_disable:
@@ -190,27 +190,28 @@ irq_disable:
*/
void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(new_crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
int scanline_end = intel_get_crtc_scanline(crtc);
u32 end_vbl_count = intel_crtc_get_vblank_counter(crtc);
ktime_t end_vbl_time = ktime_get();
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
trace_i915_pipe_update_end(crtc, end_vbl_count, scanline_end);
trace_intel_pipe_update_end(crtc, end_vbl_count, scanline_end);
/* We're still in the vblank-evade critical section, this can't race.
* Would be slightly nice to just grab the vblank count and arm the
* event outside of the critical section - the spinlock might spin for a
* while ... */
if (new_crtc_state->base.event) {
if (new_crtc_state->uapi.event) {
WARN_ON(drm_crtc_vblank_get(&crtc->base) != 0);
spin_lock(&crtc->base.dev->event_lock);
drm_crtc_arm_vblank_event(&crtc->base, new_crtc_state->base.event);
drm_crtc_arm_vblank_event(&crtc->base,
new_crtc_state->uapi.event);
spin_unlock(&crtc->base.dev->event_lock);
new_crtc_state->base.event = NULL;
new_crtc_state->uapi.event = NULL;
}
local_irq_enable();
@@ -239,9 +240,9 @@ void intel_pipe_update_end(struct intel_crtc_state *new_crtc_state)
int intel_plane_check_stride(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
u32 stride, max_stride;
/*
@@ -251,7 +252,7 @@ int intel_plane_check_stride(const struct intel_plane_state *plane_state)
* kick in due the plane being invisible.
*/
if (intel_plane_can_remap(plane_state) &&
!plane_state->base.visible)
!plane_state->uapi.visible)
return 0;
/* FIXME other color planes? */
@@ -271,10 +272,10 @@ int intel_plane_check_stride(const struct intel_plane_state *plane_state)
int intel_plane_check_src_coordinates(struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
struct drm_rect *src = &plane_state->base.src;
const struct drm_framebuffer *fb = plane_state->hw.fb;
struct drm_rect *src = &plane_state->uapi.src;
u32 src_x, src_y, src_w, src_h, hsub, vsub;
bool rotated = drm_rotation_90_or_270(plane_state->base.rotation);
bool rotated = drm_rotation_90_or_270(plane_state->hw.rotation);
/*
* Hardware doesn't handle subpixel coordinates.
@@ -327,8 +328,8 @@ skl_plane_ratio(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state,
unsigned int *num, unsigned int *den)
{
struct drm_i915_private *dev_priv = to_i915(plane_state->base.plane->dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
struct drm_i915_private *dev_priv = to_i915(plane_state->uapi.plane->dev);
const struct drm_framebuffer *fb = plane_state->hw.fb;
if (fb->format->cpp[0] == 8) {
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv)) {
@@ -347,7 +348,7 @@ skl_plane_ratio(const struct intel_crtc_state *crtc_state,
static int skl_plane_min_cdclk(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(plane_state->base.plane->dev);
struct drm_i915_private *dev_priv = to_i915(plane_state->uapi.plane->dev);
unsigned int pixel_rate = crtc_state->pixel_rate;
unsigned int src_w, src_h, dst_w, dst_h;
unsigned int num, den;
@@ -358,10 +359,10 @@ static int skl_plane_min_cdclk(const struct intel_crtc_state *crtc_state,
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
den *= 2;
src_w = drm_rect_width(&plane_state->base.src) >> 16;
src_h = drm_rect_height(&plane_state->base.src) >> 16;
dst_w = drm_rect_width(&plane_state->base.dst);
dst_h = drm_rect_height(&plane_state->base.dst);
src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
dst_w = drm_rect_width(&plane_state->uapi.dst);
dst_h = drm_rect_height(&plane_state->uapi.dst);
/* Downscaling limits the maximum pixel rate */
dst_w = min(src_w, dst_w);
@@ -395,28 +396,28 @@ skl_program_scaler(struct intel_plane *plane,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum pipe pipe = plane->pipe;
int scaler_id = plane_state->scaler_id;
const struct intel_scaler *scaler =
&crtc_state->scaler_state.scalers[scaler_id];
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->base.dst);
u32 crtc_h = drm_rect_height(&plane_state->base.dst);
int crtc_x = plane_state->uapi.dst.x1;
int crtc_y = plane_state->uapi.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->uapi.dst);
u32 crtc_h = drm_rect_height(&plane_state->uapi.dst);
u16 y_hphase, uv_rgb_hphase;
u16 y_vphase, uv_rgb_vphase;
int hscale, vscale;
hscale = drm_rect_calc_hscale(&plane_state->base.src,
&plane_state->base.dst,
hscale = drm_rect_calc_hscale(&plane_state->uapi.src,
&plane_state->uapi.dst,
0, INT_MAX);
vscale = drm_rect_calc_vscale(&plane_state->base.src,
&plane_state->base.dst,
vscale = drm_rect_calc_vscale(&plane_state->uapi.src,
&plane_state->uapi.dst,
0, INT_MAX);
/* TODO: handle sub-pixel coordinates */
if (drm_format_info_is_yuv_semiplanar(fb->format) &&
if (intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier) &&
!icl_is_hdr_plane(dev_priv, plane->id)) {
y_hphase = skl_scaler_calc_phase(1, hscale, false);
y_vphase = skl_scaler_calc_phase(1, vscale, false);
@@ -541,10 +542,10 @@ icl_program_input_csc(struct intel_plane *plane,
};
const u16 *csc;
if (plane_state->base.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
csc = input_csc_matrix[plane_state->base.color_encoding];
if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
csc = input_csc_matrix[plane_state->hw.color_encoding];
else
csc = input_csc_matrix_lr[plane_state->base.color_encoding];
csc = input_csc_matrix_lr[plane_state->hw.color_encoding];
I915_WRITE_FW(PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0), ROFF(csc[0]) |
GOFF(csc[1]));
@@ -558,7 +559,7 @@ icl_program_input_csc(struct intel_plane *plane,
I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
PREOFF_YUV_TO_RGB_HI);
if (plane_state->base.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1), 0);
else
I915_WRITE_FW(PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
@@ -574,7 +575,7 @@ static void
skl_program_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state,
int color_plane, bool slave, u32 plane_ctl)
int color_plane)
{
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum plane_id plane_id = plane->id;
@@ -582,19 +583,20 @@ skl_program_plane(struct intel_plane *plane,
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
u32 surf_addr = plane_state->color_plane[color_plane].offset;
u32 stride = skl_plane_stride(plane_state, color_plane);
u32 aux_dist = plane_state->color_plane[1].offset - surf_addr;
u32 aux_stride = skl_plane_stride(plane_state, 1);
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
int crtc_x = plane_state->uapi.dst.x1;
int crtc_y = plane_state->uapi.dst.y1;
u32 x = plane_state->color_plane[color_plane].x;
u32 y = plane_state->color_plane[color_plane].y;
u32 src_w = drm_rect_width(&plane_state->base.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->base.src) >> 16;
struct intel_plane *linked = plane_state->planar_linked_plane;
const struct drm_framebuffer *fb = plane_state->base.fb;
u8 alpha = plane_state->base.alpha >> 8;
u32 src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
const struct drm_framebuffer *fb = plane_state->hw.fb;
u8 alpha = plane_state->hw.alpha >> 8;
u32 plane_color_ctl = 0;
unsigned long irqflags;
u32 keymsk, keymax;
u32 plane_ctl = plane_state->ctl;
plane_ctl |= skl_plane_ctl_crtc(crtc_state);
@@ -623,29 +625,13 @@ skl_program_plane(struct intel_plane *plane,
I915_WRITE_FW(PLANE_STRIDE(pipe, plane_id), stride);
I915_WRITE_FW(PLANE_POS(pipe, plane_id), (crtc_y << 16) | crtc_x);
I915_WRITE_FW(PLANE_SIZE(pipe, plane_id), (src_h << 16) | src_w);
I915_WRITE_FW(PLANE_AUX_DIST(pipe, plane_id),
(plane_state->color_plane[1].offset - surf_addr) | aux_stride);
if (icl_is_hdr_plane(dev_priv, plane_id)) {
u32 cus_ctl = 0;
if (INTEL_GEN(dev_priv) < 12)
aux_dist |= aux_stride;
I915_WRITE_FW(PLANE_AUX_DIST(pipe, plane_id), aux_dist);
if (linked) {
/* Enable and use MPEG-2 chroma siting */
cus_ctl = PLANE_CUS_ENABLE |
PLANE_CUS_HPHASE_0 |
PLANE_CUS_VPHASE_SIGN_NEGATIVE |
PLANE_CUS_VPHASE_0_25;
if (linked->id == PLANE_SPRITE5)
cus_ctl |= PLANE_CUS_PLANE_7;
else if (linked->id == PLANE_SPRITE4)
cus_ctl |= PLANE_CUS_PLANE_6;
else
MISSING_CASE(linked->id);
}
I915_WRITE_FW(PLANE_CUS_CTL(pipe, plane_id), cus_ctl);
}
if (icl_is_hdr_plane(dev_priv, plane_id))
I915_WRITE_FW(PLANE_CUS_CTL(pipe, plane_id), plane_state->cus_ctl);
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv))
I915_WRITE_FW(PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
@@ -675,7 +661,7 @@ skl_program_plane(struct intel_plane *plane,
I915_WRITE_FW(PLANE_SURF(pipe, plane_id),
intel_plane_ggtt_offset(plane_state) + surf_addr);
if (!slave && plane_state->scaler_id >= 0)
if (plane_state->scaler_id >= 0)
skl_program_scaler(plane, crtc_state, plane_state);
spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
@@ -688,24 +674,12 @@ skl_update_plane(struct intel_plane *plane,
{
int color_plane = 0;
if (plane_state->planar_linked_plane) {
/* Program the UV plane */
if (plane_state->planar_linked_plane && !plane_state->planar_slave)
/* Program the UV plane on planar master */
color_plane = 1;
}
skl_program_plane(plane, crtc_state, plane_state,
color_plane, false, plane_state->ctl);
skl_program_plane(plane, crtc_state, plane_state, color_plane);
}
static void
icl_update_slave(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
skl_program_plane(plane, crtc_state, plane_state, 0, true,
plane_state->ctl | PLANE_CTL_YUV420_Y_PLANE);
}
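/*
 * icl_update_slave() can go because its work moved: the chroma
 * upsampler bits are precomputed into plane_state->cus_ctl in
 * skl_plane_check() (below), and skl_update_plane() picks color
 * plane 1 only on the planar master, so slave planes take the common
 * skl_program_plane() path. The PLANE_CTL_YUV420_Y_PLANE handling is
 * presumably folded into the precomputed plane_state->ctl outside
 * this excerpt.
 */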
static void
skl_disable_plane(struct intel_plane *plane,
const struct intel_crtc_state *crtc_state)
@@ -765,9 +739,9 @@ static void i9xx_plane_linear_gamma(u16 gamma[8])
static void
chv_update_csc(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum plane_id plane_id = plane->id;
/*
* |r| | c0 c1 c2 | |cr|
@@ -793,7 +767,7 @@ chv_update_csc(const struct intel_plane_state *plane_state)
0, 4096, 7601,
},
};
const s16 *csc = csc_matrix[plane_state->base.color_encoding];
const s16 *csc = csc_matrix[plane_state->hw.color_encoding];
/* Seems RGB data bypasses the CSC always */
if (!fb->format->is_yuv)
@@ -824,15 +798,15 @@ chv_update_csc(const struct intel_plane_state *plane_state)
static void
vlv_update_clrc(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum pipe pipe = plane->pipe;
enum plane_id plane_id = plane->id;
int contrast, brightness, sh_scale, sh_sin, sh_cos;
if (fb->format->is_yuv &&
plane_state->base.color_range == DRM_COLOR_YCBCR_LIMITED_RANGE) {
plane_state->hw.color_range == DRM_COLOR_YCBCR_LIMITED_RANGE) {
/*
* Expand limited range to full range:
* Contrast is applied first and is used to expand Y range.
@@ -866,7 +840,7 @@ vlv_plane_ratio(const struct intel_crtc_state *crtc_state,
unsigned int *num, unsigned int *den)
{
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int cpp = fb->format->cpp[0];
/*
@@ -952,8 +926,8 @@ static u32 vlv_sprite_ctl_crtc(const struct intel_crtc_state *crtc_state)
static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
u32 sprctl;
@@ -972,6 +946,9 @@ static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
case DRM_FORMAT_VYUY:
sprctl |= SP_FORMAT_YUV422 | SP_YUV_ORDER_VYUY;
break;
case DRM_FORMAT_C8:
sprctl |= SP_FORMAT_8BPP;
break;
case DRM_FORMAT_RGB565:
sprctl |= SP_FORMAT_BGR565;
break;
@@ -987,6 +964,12 @@ static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
case DRM_FORMAT_ABGR2101010:
sprctl |= SP_FORMAT_RGBA1010102;
break;
case DRM_FORMAT_XRGB2101010:
sprctl |= SP_FORMAT_BGRX1010102;
break;
case DRM_FORMAT_ARGB2101010:
sprctl |= SP_FORMAT_BGRA1010102;
break;
case DRM_FORMAT_XBGR8888:
sprctl |= SP_FORMAT_RGBX8888;
break;
@@ -998,7 +981,7 @@ static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
return 0;
}
if (plane_state->base.color_encoding == DRM_COLOR_YCBCR_BT709)
if (plane_state->hw.color_encoding == DRM_COLOR_YCBCR_BT709)
sprctl |= SP_YUV_FORMAT_BT709;
if (fb->modifier == I915_FORMAT_MOD_X_TILED)
@@ -1018,9 +1001,9 @@ static u32 vlv_sprite_ctl(const struct intel_crtc_state *crtc_state,
static void vlv_update_gamma(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum pipe pipe = plane->pipe;
enum plane_id plane_id = plane->id;
u16 gamma[8];
@@ -1052,10 +1035,10 @@ vlv_update_plane(struct intel_plane *plane,
u32 sprsurf_offset = plane_state->color_plane[0].offset;
u32 linear_offset;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->base.dst);
u32 crtc_h = drm_rect_height(&plane_state->base.dst);
int crtc_x = plane_state->uapi.dst.x1;
int crtc_y = plane_state->uapi.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->uapi.dst);
u32 crtc_h = drm_rect_height(&plane_state->uapi.dst);
u32 x = plane_state->color_plane[0].x;
u32 y = plane_state->color_plane[0].y;
unsigned long irqflags;
@@ -1150,7 +1133,7 @@ static void ivb_plane_ratio(const struct intel_crtc_state *crtc_state,
unsigned int *num, unsigned int *den)
{
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int cpp = fb->format->cpp[0];
if (hweight8(active_planes) == 2) {
@@ -1186,7 +1169,7 @@ static void ivb_plane_ratio_scaling(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state,
unsigned int *num, unsigned int *den)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int cpp = fb->format->cpp[0];
switch (cpp) {
@@ -1244,8 +1227,8 @@ static int ivb_sprite_min_cdclk(const struct intel_crtc_state *crtc_state,
*/
pixel_rate = crtc_state->pixel_rate;
src_w = drm_rect_width(&plane_state->base.src) >> 16;
dst_w = drm_rect_width(&plane_state->base.dst);
src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
dst_w = drm_rect_width(&plane_state->uapi.dst);
if (src_w != dst_w)
ivb_plane_ratio_scaling(crtc_state, plane_state, &num, &den);
@@ -1264,7 +1247,7 @@ static void hsw_plane_ratio(const struct intel_crtc_state *crtc_state,
unsigned int *num, unsigned int *den)
{
u8 active_planes = crtc_state->active_planes & ~BIT(PLANE_CURSOR);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int cpp = fb->format->cpp[0];
if (hweight8(active_planes) == 2) {
@@ -1319,8 +1302,8 @@ static u32 ivb_sprite_ctl_crtc(const struct intel_crtc_state *crtc_state)
static bool ivb_need_sprite_gamma(const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv =
to_i915(plane_state->base.plane->dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
to_i915(plane_state->uapi.plane->dev);
const struct drm_framebuffer *fb = plane_state->hw.fb;
return fb->format->cpp[0] == 8 &&
(IS_IVYBRIDGE(dev_priv) || IS_HASWELL(dev_priv));
@@ -1330,9 +1313,9 @@ static u32 ivb_sprite_ctl(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv =
to_i915(plane_state->base.plane->dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
to_i915(plane_state->uapi.plane->dev);
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
u32 sprctl;
@@ -1348,6 +1331,12 @@ static u32 ivb_sprite_ctl(const struct intel_crtc_state *crtc_state,
case DRM_FORMAT_XRGB8888:
sprctl |= SPRITE_FORMAT_RGBX888;
break;
case DRM_FORMAT_XBGR2101010:
sprctl |= SPRITE_FORMAT_RGBX101010 | SPRITE_RGB_ORDER_RGBX;
break;
case DRM_FORMAT_XRGB2101010:
sprctl |= SPRITE_FORMAT_RGBX101010;
break;
case DRM_FORMAT_XBGR16161616F:
sprctl |= SPRITE_FORMAT_RGBX161616 | SPRITE_RGB_ORDER_RGBX;
break;
@@ -1374,10 +1363,10 @@ static u32 ivb_sprite_ctl(const struct intel_crtc_state *crtc_state,
if (!ivb_need_sprite_gamma(plane_state))
sprctl |= SPRITE_INT_GAMMA_DISABLE;
if (plane_state->base.color_encoding == DRM_COLOR_YCBCR_BT709)
if (plane_state->hw.color_encoding == DRM_COLOR_YCBCR_BT709)
sprctl |= SPRITE_YUV_TO_RGB_CSC_FORMAT_BT709;
if (plane_state->base.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
sprctl |= SPRITE_YUV_RANGE_CORRECTION_DISABLE;
if (fb->modifier == I915_FORMAT_MOD_X_TILED)
@@ -1421,7 +1410,7 @@ static void ivb_sprite_linear_gamma(const struct intel_plane_state *plane_state,
static void ivb_update_gamma(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
enum pipe pipe = plane->pipe;
u16 gamma[18];
@@ -1460,14 +1449,14 @@ ivb_update_plane(struct intel_plane *plane,
u32 sprsurf_offset = plane_state->color_plane[0].offset;
u32 linear_offset;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->base.dst);
u32 crtc_h = drm_rect_height(&plane_state->base.dst);
int crtc_x = plane_state->uapi.dst.x1;
int crtc_y = plane_state->uapi.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->uapi.dst);
u32 crtc_h = drm_rect_height(&plane_state->uapi.dst);
u32 x = plane_state->color_plane[0].x;
u32 y = plane_state->color_plane[0].y;
u32 src_w = drm_rect_width(&plane_state->base.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->base.src) >> 16;
u32 src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
u32 sprctl, sprscale = 0;
unsigned long irqflags;
@@ -1566,7 +1555,7 @@ ivb_plane_get_hw_state(struct intel_plane *plane,
static int g4x_sprite_min_cdclk(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int hscale, pixel_rate;
unsigned int limit, decimate;
@@ -1580,8 +1569,8 @@ static int g4x_sprite_min_cdclk(const struct intel_crtc_state *crtc_state,
pixel_rate = crtc_state->pixel_rate;
/* Horizontal downscaling limits the maximum pixel rate */
hscale = drm_rect_calc_hscale(&plane_state->base.src,
&plane_state->base.dst,
hscale = drm_rect_calc_hscale(&plane_state->uapi.src,
&plane_state->uapi.dst,
0, INT_MAX);
if (hscale < 0x10000)
return pixel_rate;
@@ -1635,9 +1624,9 @@ static u32 g4x_sprite_ctl(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv =
to_i915(plane_state->base.plane->dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
to_i915(plane_state->uapi.plane->dev);
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
u32 dvscntr;
@@ -1653,6 +1642,12 @@ static u32 g4x_sprite_ctl(const struct intel_crtc_state *crtc_state,
case DRM_FORMAT_XRGB8888:
dvscntr |= DVS_FORMAT_RGBX888;
break;
case DRM_FORMAT_XBGR2101010:
dvscntr |= DVS_FORMAT_RGBX101010 | DVS_RGB_ORDER_XBGR;
break;
case DRM_FORMAT_XRGB2101010:
dvscntr |= DVS_FORMAT_RGBX101010;
break;
case DRM_FORMAT_XBGR16161616F:
dvscntr |= DVS_FORMAT_RGBX161616 | DVS_RGB_ORDER_XBGR;
break;
@@ -1676,10 +1671,10 @@ static u32 g4x_sprite_ctl(const struct intel_crtc_state *crtc_state,
return 0;
}
if (plane_state->base.color_encoding == DRM_COLOR_YCBCR_BT709)
if (plane_state->hw.color_encoding == DRM_COLOR_YCBCR_BT709)
dvscntr |= DVS_YUV_FORMAT_BT709;
if (plane_state->base.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
dvscntr |= DVS_YUV_RANGE_CORRECTION_DISABLE;
if (fb->modifier == I915_FORMAT_MOD_X_TILED)
@@ -1698,9 +1693,9 @@ static u32 g4x_sprite_ctl(const struct intel_crtc_state *crtc_state,
static void g4x_update_gamma(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum pipe pipe = plane->pipe;
u16 gamma[8];
int i;
@@ -1730,9 +1725,9 @@ static void ilk_sprite_linear_gamma(u16 gamma[17])
static void ilk_update_gamma(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
enum pipe pipe = plane->pipe;
u16 gamma[17];
int i;
@@ -1766,14 +1761,14 @@ g4x_update_plane(struct intel_plane *plane,
u32 dvssurf_offset = plane_state->color_plane[0].offset;
u32 linear_offset;
const struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
int crtc_x = plane_state->base.dst.x1;
int crtc_y = plane_state->base.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->base.dst);
u32 crtc_h = drm_rect_height(&plane_state->base.dst);
int crtc_x = plane_state->uapi.dst.x1;
int crtc_y = plane_state->uapi.dst.y1;
u32 crtc_w = drm_rect_width(&plane_state->uapi.dst);
u32 crtc_h = drm_rect_height(&plane_state->uapi.dst);
u32 x = plane_state->color_plane[0].x;
u32 y = plane_state->color_plane[0].y;
u32 src_w = drm_rect_width(&plane_state->base.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->base.src) >> 16;
u32 src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
u32 src_h = drm_rect_height(&plane_state->uapi.src) >> 16;
u32 dvscntr, dvsscale = 0;
unsigned long irqflags;
@@ -1886,12 +1881,12 @@ static int
g4x_sprite_check_scaling(struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_rect *src = &plane_state->base.src;
const struct drm_rect *dst = &plane_state->base.dst;
const struct drm_framebuffer *fb = plane_state->hw.fb;
const struct drm_rect *src = &plane_state->uapi.src;
const struct drm_rect *dst = &plane_state->uapi.dst;
int src_x, src_w, src_h, crtc_w, crtc_h;
const struct drm_display_mode *adjusted_mode =
&crtc_state->base.adjusted_mode;
&crtc_state->hw.adjusted_mode;
unsigned int stride = plane_state->color_plane[0].stride;
unsigned int cpp = fb->format->cpp[0];
unsigned int width_bytes;
@@ -1947,13 +1942,13 @@ static int
g4x_sprite_check(struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
int min_scale = DRM_PLANE_HELPER_NO_SCALING;
int max_scale = DRM_PLANE_HELPER_NO_SCALING;
int ret;
if (intel_fb_scalable(plane_state->base.fb)) {
if (intel_fb_scalable(plane_state->hw.fb)) {
if (INTEL_GEN(dev_priv) < 7) {
min_scale = 1;
max_scale = 16 << 16;
@@ -1963,8 +1958,8 @@ g4x_sprite_check(struct intel_crtc_state *crtc_state,
}
}
ret = drm_atomic_helper_check_plane_state(&plane_state->base,
&crtc_state->base,
ret = drm_atomic_helper_check_plane_state(&plane_state->uapi,
&crtc_state->uapi,
min_scale, max_scale,
true, true);
if (ret)
@@ -1974,7 +1969,7 @@ g4x_sprite_check(struct intel_crtc_state *crtc_state,
if (ret)
return ret;
if (!plane_state->base.visible)
if (!plane_state->uapi.visible)
return 0;
ret = intel_plane_check_src_coordinates(plane_state);
@@ -1995,9 +1990,9 @@ g4x_sprite_check(struct intel_crtc_state *crtc_state,
int chv_plane_check_rotation(const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
unsigned int rotation = plane_state->base.rotation;
unsigned int rotation = plane_state->hw.rotation;
/* CHV ignores the mirror bit when the rotate bit is set :( */
if (IS_CHERRYVIEW(dev_priv) &&
@@ -2020,8 +2015,8 @@ vlv_sprite_check(struct intel_crtc_state *crtc_state,
if (ret)
return ret;
ret = drm_atomic_helper_check_plane_state(&plane_state->base,
&crtc_state->base,
ret = drm_atomic_helper_check_plane_state(&plane_state->uapi,
&crtc_state->uapi,
DRM_PLANE_HELPER_NO_SCALING,
DRM_PLANE_HELPER_NO_SCALING,
true, true);
@@ -2032,7 +2027,7 @@ vlv_sprite_check(struct intel_crtc_state *crtc_state,
if (ret)
return ret;
if (!plane_state->base.visible)
if (!plane_state->uapi.visible)
return 0;
ret = intel_plane_check_src_coordinates(plane_state);
@@ -2047,10 +2042,10 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
const struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
struct drm_format_name_buf format_name;
if (!fb)
@@ -2105,12 +2100,13 @@ static int skl_plane_check_fb(const struct intel_crtc_state *crtc_state,
}
/* Y-tiling is not supported in IF-ID Interlace mode */
if (crtc_state->base.enable &&
crtc_state->base.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE &&
if (crtc_state->hw.enable &&
crtc_state->hw.adjusted_mode.flags & DRM_MODE_FLAG_INTERLACE &&
(fb->modifier == I915_FORMAT_MOD_Y_TILED ||
fb->modifier == I915_FORMAT_MOD_Yf_TILED ||
fb->modifier == I915_FORMAT_MOD_Y_TILED_CCS ||
fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS)) {
fb->modifier == I915_FORMAT_MOD_Yf_TILED_CCS ||
fb->modifier == I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS)) {
DRM_DEBUG_KMS("Y/Yf tiling not supported in IF-ID mode\n");
return -EINVAL;
}
@@ -2122,9 +2118,9 @@ static int skl_plane_check_dst_coordinates(const struct intel_crtc_state *crtc_s
const struct intel_plane_state *plane_state)
{
struct drm_i915_private *dev_priv =
to_i915(plane_state->base.plane->dev);
int crtc_x = plane_state->base.dst.x1;
int crtc_w = drm_rect_width(&plane_state->base.dst);
to_i915(plane_state->uapi.plane->dev);
int crtc_x = plane_state->uapi.dst.x1;
int crtc_w = drm_rect_width(&plane_state->uapi.dst);
int pipe_src_w = crtc_state->pipe_src_w;
/*
@@ -2150,12 +2146,13 @@ static int skl_plane_check_dst_coordinates(const struct intel_crtc_state *crtc_s
static int skl_plane_check_nv12_rotation(const struct intel_plane_state *plane_state)
{
const struct drm_framebuffer *fb = plane_state->base.fb;
unsigned int rotation = plane_state->base.rotation;
int src_w = drm_rect_width(&plane_state->base.src) >> 16;
const struct drm_framebuffer *fb = plane_state->hw.fb;
unsigned int rotation = plane_state->hw.rotation;
int src_w = drm_rect_width(&plane_state->uapi.src) >> 16;
/* Display WA #1106 */
if (drm_format_info_is_yuv_semiplanar(fb->format) && src_w & 3 &&
if (intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier) &&
src_w & 3 &&
(rotation == DRM_MODE_ROTATE_270 ||
rotation == (DRM_MODE_REFLECT_X | DRM_MODE_ROTATE_90))) {
DRM_DEBUG_KMS("src width must be multiple of 4 for rotated planar YUV\n");
@@ -2175,7 +2172,7 @@ static int skl_plane_max_scale(struct drm_i915_private *dev_priv,
* FIXME need to properly check this later.
*/
if (INTEL_GEN(dev_priv) >= 10 || IS_GEMINILAKE(dev_priv) ||
!drm_format_info_is_yuv_semiplanar(fb->format))
!intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier))
return 0x30000 - 1;
else
return 0x20000 - 1;
@@ -2184,9 +2181,9 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
static int skl_plane_check(struct intel_crtc_state *crtc_state,
struct intel_plane_state *plane_state)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
const struct drm_framebuffer *fb = plane_state->base.fb;
const struct drm_framebuffer *fb = plane_state->hw.fb;
int min_scale = DRM_PLANE_HELPER_NO_SCALING;
int max_scale = DRM_PLANE_HELPER_NO_SCALING;
int ret;
@@ -2201,8 +2198,8 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
max_scale = skl_plane_max_scale(dev_priv, fb);
}
ret = drm_atomic_helper_check_plane_state(&plane_state->base,
&crtc_state->base,
ret = drm_atomic_helper_check_plane_state(&plane_state->uapi,
&crtc_state->uapi,
min_scale, max_scale,
true, true);
if (ret)
@@ -2212,7 +2209,7 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
if (ret)
return ret;
if (!plane_state->base.visible)
if (!plane_state->uapi.visible)
return 0;
ret = skl_plane_check_dst_coordinates(crtc_state, plane_state);
@@ -2228,8 +2225,8 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
return ret;
/* HW only has 8 bits pixel precision, disable plane if invisible */
if (!(plane_state->base.alpha >> 8))
plane_state->base.visible = false;
if (!(plane_state->hw.alpha >> 8))
plane_state->uapi.visible = false;
plane_state->ctl = skl_plane_ctl(crtc_state, plane_state);
@@ -2237,6 +2234,15 @@ static int skl_plane_check(struct intel_crtc_state *crtc_state,
plane_state->color_ctl = glk_plane_color_ctl(crtc_state,
plane_state);
if (intel_format_info_is_yuv_semiplanar(fb->format, fb->modifier) &&
icl_is_hdr_plane(dev_priv, plane->id))
/* Enable and use MPEG-2 chroma siting */
plane_state->cus_ctl = PLANE_CUS_ENABLE |
PLANE_CUS_HPHASE_0 |
PLANE_CUS_VPHASE_SIGN_NEGATIVE | PLANE_CUS_VPHASE_0_25;
else
plane_state->cus_ctl = 0;
return 0;
}
@@ -2248,7 +2254,7 @@ static bool has_dst_key_in_primary_plane(struct drm_i915_private *dev_priv)
static void intel_plane_set_ckey(struct intel_plane_state *plane_state,
const struct drm_intel_sprite_colorkey *set)
{
struct intel_plane *plane = to_intel_plane(plane_state->base.plane);
struct intel_plane *plane = to_intel_plane(plane_state->uapi.plane);
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
struct drm_intel_sprite_colorkey *key = &plane_state->ckey;
@@ -2375,6 +2381,8 @@ static const u64 i9xx_plane_format_modifiers[] = {
static const u32 snb_plane_formats[] = {
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_XRGB16161616F,
DRM_FORMAT_XBGR16161616F,
DRM_FORMAT_YUYV,
@@ -2384,11 +2392,12 @@ static const u32 snb_plane_formats[] = {
};
static const u32 vlv_plane_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_RGB565,
DRM_FORMAT_ABGR8888,
DRM_FORMAT_ARGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_ARGB8888,
DRM_FORMAT_ABGR8888,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_ABGR2101010,
DRM_FORMAT_YUYV,
@@ -2397,6 +2406,23 @@ static const u32 vlv_plane_formats[] = {
DRM_FORMAT_VYUY,
};
static const u32 chv_pipe_b_sprite_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_RGB565,
DRM_FORMAT_XRGB8888,
DRM_FORMAT_XBGR8888,
DRM_FORMAT_ARGB8888,
DRM_FORMAT_ABGR8888,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_ARGB2101010,
DRM_FORMAT_ABGR2101010,
DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU,
DRM_FORMAT_UYVY,
DRM_FORMAT_VYUY,
};
static const u32 skl_plane_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_RGB565,
@@ -2462,6 +2488,8 @@ static const u32 icl_sdr_y_plane_formats[] = {
DRM_FORMAT_ABGR8888,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_ARGB2101010,
DRM_FORMAT_ABGR2101010,
DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU,
DRM_FORMAT_UYVY,
@@ -2483,6 +2511,8 @@ static const u32 icl_sdr_uv_plane_formats[] = {
DRM_FORMAT_ABGR8888,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_ARGB2101010,
DRM_FORMAT_ABGR2101010,
DRM_FORMAT_YUYV,
DRM_FORMAT_YVYU,
DRM_FORMAT_UYVY,
@@ -2508,6 +2538,8 @@ static const u32 icl_hdr_plane_formats[] = {
DRM_FORMAT_ABGR8888,
DRM_FORMAT_XRGB2101010,
DRM_FORMAT_XBGR2101010,
DRM_FORMAT_ARGB2101010,
DRM_FORMAT_ABGR2101010,
DRM_FORMAT_XRGB16161616F,
DRM_FORMAT_XBGR16161616F,
DRM_FORMAT_ARGB16161616F,
@@ -2546,7 +2578,8 @@ static const u64 skl_plane_format_modifiers_ccs[] = {
DRM_FORMAT_MOD_INVALID
};
static const u64 gen12_plane_format_modifiers_noccs[] = {
static const u64 gen12_plane_format_modifiers_ccs[] = {
I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS,
I915_FORMAT_MOD_Y_TILED,
I915_FORMAT_MOD_X_TILED,
DRM_FORMAT_MOD_LINEAR,
@@ -2593,6 +2626,8 @@ static bool snb_sprite_format_mod_supported(struct drm_plane *_plane,
switch (format) {
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_XRGB16161616F:
case DRM_FORMAT_XBGR16161616F:
case DRM_FORMAT_YUYV:
@@ -2620,6 +2655,7 @@ static bool vlv_sprite_format_mod_supported(struct drm_plane *_plane,
}
switch (format) {
case DRM_FORMAT_C8:
case DRM_FORMAT_RGB565:
case DRM_FORMAT_ABGR8888:
case DRM_FORMAT_ARGB8888:
@@ -2627,6 +2663,8 @@ static bool vlv_sprite_format_mod_supported(struct drm_plane *_plane,
case DRM_FORMAT_XRGB8888:
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ABGR2101010:
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_ARGB2101010:
case DRM_FORMAT_YUYV:
case DRM_FORMAT_YVYU:
case DRM_FORMAT_UYVY:
@@ -2671,6 +2709,8 @@ static bool skl_plane_format_mod_supported(struct drm_plane *_plane,
case DRM_FORMAT_RGB565:
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ARGB2101010:
case DRM_FORMAT_ABGR2101010:
case DRM_FORMAT_YUYV:
case DRM_FORMAT_YVYU:
case DRM_FORMAT_UYVY:
@@ -2710,6 +2750,7 @@ static bool gen12_plane_format_mod_supported(struct drm_plane *_plane,
case DRM_FORMAT_MOD_LINEAR:
case I915_FORMAT_MOD_X_TILED:
case I915_FORMAT_MOD_Y_TILED:
case I915_FORMAT_MOD_Y_TILED_GEN12_RC_CCS:
break;
default:
return false;
@@ -2720,9 +2761,14 @@ static bool gen12_plane_format_mod_supported(struct drm_plane *_plane,
case DRM_FORMAT_XBGR8888:
case DRM_FORMAT_ARGB8888:
case DRM_FORMAT_ABGR8888:
if (is_ccs_modifier(modifier))
return true;
/* fall through */
case DRM_FORMAT_RGB565:
case DRM_FORMAT_XRGB2101010:
case DRM_FORMAT_XBGR2101010:
case DRM_FORMAT_ARGB2101010:
case DRM_FORMAT_ABGR2101010:
case DRM_FORMAT_YUYV:
case DRM_FORMAT_YVYU:
case DRM_FORMAT_UYVY:
@@ -2916,8 +2962,6 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv,
plane->get_hw_state = skl_plane_get_hw_state;
plane->check_plane = skl_plane_check;
plane->min_cdclk = skl_plane_min_cdclk;
if (icl_is_nv12_y_plane(plane_id))
plane->update_slave = icl_update_slave;
if (INTEL_GEN(dev_priv) >= 11)
formats = icl_get_plane_formats(dev_priv, pipe,
@@ -2929,13 +2973,11 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv,
formats = skl_get_plane_formats(dev_priv, pipe,
plane_id, &num_formats);
plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, plane_id);
if (INTEL_GEN(dev_priv) >= 12) {
/* TODO: Implement support for gen-12 CCS modifiers */
plane->has_ccs = false;
modifiers = gen12_plane_format_modifiers_noccs;
modifiers = gen12_plane_format_modifiers_ccs;
plane_funcs = &gen12_plane_funcs;
} else {
plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, plane_id);
if (plane->has_ccs)
modifiers = skl_plane_format_modifiers_ccs;
else
@@ -3025,8 +3067,13 @@ intel_sprite_plane_create(struct drm_i915_private *dev_priv,
plane->check_plane = vlv_sprite_check;
plane->min_cdclk = vlv_plane_min_cdclk;
formats = vlv_plane_formats;
num_formats = ARRAY_SIZE(vlv_plane_formats);
if (IS_CHERRYVIEW(dev_priv) && pipe == PIPE_B) {
formats = chv_pipe_b_sprite_formats;
num_formats = ARRAY_SIZE(chv_pipe_b_sprite_formats);
} else {
formats = vlv_plane_formats;
num_formats = ARRAY_SIZE(vlv_plane_formats);
}
modifiers = i9xx_plane_format_modifiers;
plane_funcs = &vlv_sprite_funcs;
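The pipe B special case exists because CHV's pipe B sprites additionally accept the four 30-bit RGB formats (XRGB2101010, XBGR2101010, ARGB2101010, ABGR2101010); compare chv_pipe_b_sprite_formats with vlv_plane_formats above.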


@@ -924,7 +924,7 @@ intel_enable_tv(struct intel_encoder *encoder,
/* Prevents vblank waits from timing out in intel_tv_detect_type() */
intel_wait_for_vblank(dev_priv,
to_intel_crtc(pipe_config->base.crtc)->pipe);
to_intel_crtc(pipe_config->uapi.crtc)->pipe);
I915_WRITE(TV_CTL, I915_READ(TV_CTL) | TV_ENC_ENABLE);
}
@@ -1085,7 +1085,7 @@ intel_tv_get_config(struct intel_encoder *encoder,
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
struct drm_display_mode mode = {};
u32 tv_ctl, hctl1, hctl3, vctl1, vctl2, tmp;
struct tv_mode tv_mode = {};
@@ -1188,7 +1188,7 @@ intel_tv_compute_config(struct intel_encoder *encoder,
to_intel_tv_connector_state(conn_state);
const struct tv_mode *tv_mode = intel_tv_mode_find(conn_state);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
int hdisplay = adjusted_mode->crtc_hdisplay;
int vdisplay = adjusted_mode->crtc_vdisplay;
@@ -1417,7 +1417,7 @@ static void intel_tv_pre_enable(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_tv *intel_tv = enc_to_tv(encoder);
const struct intel_tv_connector_state *tv_conn_state =
to_intel_tv_connector_state(conn_state);


@@ -115,6 +115,7 @@ enum bdb_block_id {
BDB_MIPI_CONFIG = 52,
BDB_MIPI_SEQUENCE = 53,
BDB_COMPRESSION_PARAMETERS = 56,
BDB_GENERIC_DTD = 58,
BDB_SKIP = 254, /* VBIOS private block, ignore */
};
@@ -368,7 +369,7 @@ struct child_device_config {
u16 dtd_buf_ptr; /* 161 */
u8 edidless_efp:1; /* 161 */
u8 compression_enable:1; /* 198 */
u8 compression_method:1; /* 198 */
u8 compression_method_cps:1; /* 198 */
u8 ganged_edp:1; /* 202 */
u8 reserved0:4;
u8 compression_structure_index:4; /* 198 */
@@ -792,6 +793,35 @@ struct bdb_lfp_backlight_data {
struct lfp_backlight_control_method backlight_control[16];
} __packed;
/*
* Block 44 - LFP Power Conservation Features Block
*/
struct als_data_entry {
u16 backlight_adjust;
u16 lux;
} __packed;
struct agressiveness_profile_entry {
u8 dpst_agressiveness : 4;
u8 lace_agressiveness : 4;
} __packed;
struct bdb_lfp_power {
u8 lfp_feature_bits;
struct als_data_entry als[5];
u8 lace_aggressiveness_profile;
u16 dpst;
u16 psr;
u16 drrs;
u16 lace_support;
u16 adt;
u16 dmrrs;
u16 adb;
u16 lace_enabled_status;
struct agressiveness_profile_entry aggressivenes[16];
} __packed;
/*
* Block 52 - MIPI Configuration Block
*/
@@ -863,4 +893,34 @@ struct bdb_compression_parameters {
struct dsc_compression_parameters_entry data[16];
} __packed;
/*
* Block 58 - Generic DTD Block
*/
struct generic_dtd_entry {
u32 pixel_clock;
u16 hactive;
u16 hblank;
u16 hfront_porch;
u16 hsync;
u16 vactive;
u16 vblank;
u16 vfront_porch;
u16 vsync;
u16 width_mm;
u16 height_mm;
/* Flags */
u8 rsvd_flags:6;
u8 vsync_positive_polarity:1;
u8 hsync_positive_polarity:1;
u8 rsvd[3];
} __packed;
struct bdb_generic_dtd {
u16 gdtd_size;
struct generic_dtd_entry dtd[]; /* up to 24 DTD's */
} __packed;
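For reference, these fields map onto drm_display_mode timings with the usual front-porch/sync/blank arithmetic. A hedged sketch of that conversion, not the driver's actual parser, assuming pixel_clock is stored in kHz (the unit drm_display_mode::clock expects; divide by 1000 if the VBT stores Hz):

#include <drm/drm_modes.h>

static void generic_dtd_to_mode(const struct generic_dtd_entry *dtd,
				struct drm_display_mode *mode)
{
	/* Assumption: pixel_clock is in kHz; adjust if it is Hz. */
	mode->clock = dtd->pixel_clock;

	/* hblank covers front porch + sync + back porch. */
	mode->hdisplay = dtd->hactive;
	mode->hsync_start = dtd->hactive + dtd->hfront_porch;
	mode->hsync_end = mode->hsync_start + dtd->hsync;
	mode->htotal = dtd->hactive + dtd->hblank;

	mode->vdisplay = dtd->vactive;
	mode->vsync_start = dtd->vactive + dtd->vfront_porch;
	mode->vsync_end = mode->vsync_start + dtd->vsync;
	mode->vtotal = dtd->vactive + dtd->vblank;

	mode->width_mm = dtd->width_mm;
	mode->height_mm = dtd->height_mm;

	mode->flags = 0;
	mode->flags |= dtd->hsync_positive_polarity ?
		DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC;
	mode->flags |= dtd->vsync_positive_polarity ?
		DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC;
}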
#endif /* _INTEL_VBT_DEFS_H_ */


@@ -10,6 +10,7 @@
#include "i915_drv.h"
#include "intel_display_types.h"
#include "intel_dsi.h"
#include "intel_vdsc.h"
enum ROW_INDEX_BPP {
@@ -30,10 +31,8 @@ enum COLUMN_INDEX_BPC {
MAX_COLUMN_INDEX
};
#define DSC_SUPPORTED_VERSION_MIN 1
/* From DSC_v1.11 spec, rc_parameter_Set syntax element typically constant */
static u16 rc_buf_thresh[] = {
static const u16 rc_buf_thresh[] = {
896, 1792, 2688, 3584, 4480, 5376, 6272, 6720, 7168, 7616,
7744, 7872, 8000, 8064
};
@@ -53,7 +52,7 @@ struct rc_parameters {
* Selected Rate Control Related Parameter Recommended Values
* from DSC_v1.11 spec & C Model release: DSC_model_20161212
*/
static struct rc_parameters rc_params[][MAX_COLUMN_INDEX] = {
static const struct rc_parameters rc_parameters[][MAX_COLUMN_INDEX] = {
{
/* 6BPP/8BPC */
{ 768, 15, 6144, 3, 13, 11, 11, {
@@ -319,63 +318,84 @@ static int get_column_index_for_rc_params(u8 bits_per_component)
}
}
int intel_dp_compute_dsc_params(struct intel_dp *intel_dp,
struct intel_crtc_state *pipe_config)
static const struct rc_parameters *get_rc_params(u16 compressed_bpp,
u8 bits_per_component)
{
int row_index, column_index;
row_index = get_row_index_for_rc_params(compressed_bpp);
if (row_index < 0)
return NULL;
column_index = get_column_index_for_rc_params(bits_per_component);
if (column_index < 0)
return NULL;
return &rc_parameters[row_index][column_index];
}
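/*
 * Any bpp/bpc pairing the rate-control tables do not cover comes back
 * NULL, and intel_dsc_compute_params() below maps that to -EINVAL
 * rather than guessing; a supported pairing such as 8 bpp at 10 bpc
 * resolves to the corresponding rc_parameters[][] row and column.
 */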
bool intel_dsc_source_support(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *i915 = to_i915(encoder->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
enum pipe pipe = crtc->pipe;
if (!INTEL_INFO(i915)->display.has_dsc)
return false;
/* On TGL, DSC is supported on all Pipes */
if (INTEL_GEN(i915) >= 12)
return true;
if (INTEL_GEN(i915) >= 10 &&
(pipe != PIPE_A ||
(cpu_transcoder == TRANSCODER_EDP ||
cpu_transcoder == TRANSCODER_DSI_0 ||
cpu_transcoder == TRANSCODER_DSI_1)))
return true;
return false;
}
static bool is_pipe_dsc(const struct intel_crtc_state *crtc_state)
{
const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
const struct drm_i915_private *i915 = to_i915(crtc->base.dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
if (INTEL_GEN(i915) >= 12)
return true;
if (cpu_transcoder == TRANSCODER_EDP ||
cpu_transcoder == TRANSCODER_DSI_0 ||
cpu_transcoder == TRANSCODER_DSI_1)
return false;
/* There's no pipe A DSC engine on ICL */
WARN_ON(crtc->pipe == PIPE_A);
return true;
}
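/*
 * Put differently: on TGL every pipe has its own DSC engine, so DSC is
 * always "pipe DSC"; on ICL the eDP/DSI transcoders are served by the
 * standalone DSCA/DSCC engines instead, and pipe A has no DSC engine of
 * its own.
 */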
int intel_dsc_compute_params(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
struct drm_dsc_config *vdsc_cfg = &pipe_config->dsc.config;
u16 compressed_bpp = pipe_config->dsc.compressed_bpp;
const struct rc_parameters *rc_params;
u8 i = 0;
int row_index = 0;
int column_index = 0;
u8 line_buf_depth = 0;
vdsc_cfg->pic_width = pipe_config->base.adjusted_mode.crtc_hdisplay;
vdsc_cfg->pic_height = pipe_config->base.adjusted_mode.crtc_vdisplay;
vdsc_cfg->pic_width = pipe_config->hw.adjusted_mode.crtc_hdisplay;
vdsc_cfg->pic_height = pipe_config->hw.adjusted_mode.crtc_vdisplay;
vdsc_cfg->slice_width = DIV_ROUND_UP(vdsc_cfg->pic_width,
pipe_config->dsc.slice_count);
/*
* Slice Height of 8 works for all currently available panels. So start
* with that if pic_height is an integral multiple of 8.
* Eventually add logic to try multiple slice heights.
*/
if (vdsc_cfg->pic_height % 8 == 0)
vdsc_cfg->slice_height = 8;
else if (vdsc_cfg->pic_height % 4 == 0)
vdsc_cfg->slice_height = 4;
else
vdsc_cfg->slice_height = 2;
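/* e.g. pic_height 2160 -> slice_height 8, 1084 -> 4, 1082 -> 2 */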
/* Values filled from DSC Sink DPCD */
vdsc_cfg->dsc_version_major =
(intel_dp->dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] &
DP_DSC_MAJOR_MASK) >> DP_DSC_MAJOR_SHIFT;
vdsc_cfg->dsc_version_minor =
min(DSC_SUPPORTED_VERSION_MIN,
(intel_dp->dsc_dpcd[DP_DSC_REV - DP_DSC_SUPPORT] &
DP_DSC_MINOR_MASK) >> DP_DSC_MINOR_SHIFT);
vdsc_cfg->convert_rgb = intel_dp->dsc_dpcd[DP_DSC_DEC_COLOR_FORMAT_CAP - DP_DSC_SUPPORT] &
DP_DSC_RGB;
line_buf_depth = drm_dp_dsc_sink_line_buf_depth(intel_dp->dsc_dpcd);
if (!line_buf_depth) {
DRM_DEBUG_KMS("DSC Sink Line Buffer Depth invalid\n");
return -EINVAL;
}
if (vdsc_cfg->dsc_version_minor == 2)
vdsc_cfg->line_buf_depth = (line_buf_depth == DSC_1_2_MAX_LINEBUF_DEPTH_BITS) ?
DSC_1_2_MAX_LINEBUF_DEPTH_VAL : line_buf_depth;
else
vdsc_cfg->line_buf_depth = (line_buf_depth > DSC_1_1_MAX_LINEBUF_DEPTH_BITS) ?
DSC_1_1_MAX_LINEBUF_DEPTH_BITS : line_buf_depth;
/* Gen 11 does not support YCbCr */
vdsc_cfg->simple_422 = false;
/* Gen 11 does not support VBR */
vdsc_cfg->vbr_enable = false;
vdsc_cfg->block_pred_enable =
intel_dp->dsc_dpcd[DP_DSC_BLK_PREDICTION_SUPPORT - DP_DSC_SUPPORT] &
DP_DSC_BLK_PREDICTION_IS_SUPPORTED;
/* Gen 11 only supports integral values of bpp */
vdsc_cfg->bits_per_pixel = compressed_bpp << 4;
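/*
 * The DSC bits_per_pixel field carries 4 fractional bits, hence the
 * shift: e.g. compressed_bpp == 12 gives bits_per_pixel == 0xc0 (12.0).
 */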
@@ -399,39 +419,29 @@ int intel_dp_compute_dsc_params(struct intel_dp *intel_dp,
vdsc_cfg->rc_buf_thresh[13] = 0x7D;
}
row_index = get_row_index_for_rc_params(compressed_bpp);
column_index =
get_column_index_for_rc_params(vdsc_cfg->bits_per_component);
if (row_index < 0 || column_index < 0)
rc_params = get_rc_params(compressed_bpp, vdsc_cfg->bits_per_component);
if (!rc_params)
return -EINVAL;
vdsc_cfg->first_line_bpg_offset =
rc_params[row_index][column_index].first_line_bpg_offset;
vdsc_cfg->initial_xmit_delay =
rc_params[row_index][column_index].initial_xmit_delay;
vdsc_cfg->initial_offset =
rc_params[row_index][column_index].initial_offset;
vdsc_cfg->flatness_min_qp =
rc_params[row_index][column_index].flatness_min_qp;
vdsc_cfg->flatness_max_qp =
rc_params[row_index][column_index].flatness_max_qp;
vdsc_cfg->rc_quant_incr_limit0 =
rc_params[row_index][column_index].rc_quant_incr_limit0;
vdsc_cfg->rc_quant_incr_limit1 =
rc_params[row_index][column_index].rc_quant_incr_limit1;
vdsc_cfg->first_line_bpg_offset = rc_params->first_line_bpg_offset;
vdsc_cfg->initial_xmit_delay = rc_params->initial_xmit_delay;
vdsc_cfg->initial_offset = rc_params->initial_offset;
vdsc_cfg->flatness_min_qp = rc_params->flatness_min_qp;
vdsc_cfg->flatness_max_qp = rc_params->flatness_max_qp;
vdsc_cfg->rc_quant_incr_limit0 = rc_params->rc_quant_incr_limit0;
vdsc_cfg->rc_quant_incr_limit1 = rc_params->rc_quant_incr_limit1;
for (i = 0; i < DSC_NUM_BUF_RANGES; i++) {
vdsc_cfg->rc_range_params[i].range_min_qp =
rc_params[row_index][column_index].rc_range_params[i].range_min_qp;
rc_params->rc_range_params[i].range_min_qp;
vdsc_cfg->rc_range_params[i].range_max_qp =
rc_params[row_index][column_index].rc_range_params[i].range_max_qp;
rc_params->rc_range_params[i].range_max_qp;
/*
* Range BPG Offset uses 2's complement and is only 6 bits wide, so
* mask it to keep only those 6 bits.
*/
vdsc_cfg->rc_range_params[i].range_bpg_offset =
rc_params[row_index][column_index].rc_range_params[i].range_bpg_offset &
rc_params->rc_range_params[i].range_bpg_offset &
DSC_RANGE_BPG_OFFSET_MASK;
}
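/*
 * Worked example of the masking above: range_bpg_offset == -2 becomes
 * 0x3e once truncated to 6-bit two's complement (-2 & 0x3f == 0x3e).
 */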
@@ -453,41 +463,42 @@ int intel_dp_compute_dsc_params(struct intel_dp *intel_dp,
vdsc_cfg->initial_scale_value = (vdsc_cfg->rc_model_size << 3) /
(vdsc_cfg->rc_model_size - vdsc_cfg->initial_offset);
return drm_dsc_compute_rc_parameters(vdsc_cfg);
return 0;
}
enum intel_display_power_domain
intel_dsc_power_domain(const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *i915 = to_i915(crtc_state->base.crtc->dev);
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *i915 = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
/*
* On ICL VDSC/joining for eDP transcoder uses a separate power well,
* PW2. This requires POWER_DOMAIN_TRANSCODER_VDSC_PW2 power domain.
* For any other transcoder, VDSC/joining uses the power well associated
* with the pipe/transcoder in use. Hence another reference on the
* transcoder power domain will suffice.
* VDSC/joining uses a separate power well, PW2, and requires
* POWER_DOMAIN_TRANSCODER_VDSC_PW2 power domain in two cases:
*
* On TGL we have the same mapping, but for transcoder A (the special
* TRANSCODER_EDP is gone).
* - ICL eDP/DSI transcoder
* - TGL pipe A
*
* For any other pipe, VDSC/joining uses the power well associated with
* the pipe in use. Hence another reference on the pipe power domain
* will suffice. (Except no VDSC/joining on ICL pipe A.)
*/
if (INTEL_GEN(i915) >= 12 && cpu_transcoder == TRANSCODER_A)
return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
else if (cpu_transcoder == TRANSCODER_EDP)
if (INTEL_GEN(i915) >= 12 && pipe == PIPE_A)
return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
else if (is_pipe_dsc(crtc_state))
return POWER_DOMAIN_PIPE(pipe);
else
return POWER_DOMAIN_TRANSCODER(cpu_transcoder);
return POWER_DOMAIN_TRANSCODER_VDSC_PW2;
}
static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
static void intel_dsc_pps_configure(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
enum pipe pipe = crtc->pipe;
enum transcoder cpu_transcoder = crtc_state->cpu_transcoder;
u32 pps_val = 0;
u32 rc_buf_thresh_dword[4];
u32 rc_range_params_dword[8];
@@ -508,7 +519,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
if (vdsc_cfg->vbr_enable)
pps_val |= DSC_VBR_ENABLE;
DRM_INFO("PPS0 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_0, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -527,7 +538,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val = 0;
pps_val |= DSC_BPP(vdsc_cfg->bits_per_pixel);
DRM_INFO("PPS1 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_1, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -547,7 +558,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_PIC_HEIGHT(vdsc_cfg->pic_height) |
DSC_PIC_WIDTH(vdsc_cfg->pic_width / num_vdsc_instances);
DRM_INFO("PPS2 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_2, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -567,7 +578,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_SLICE_HEIGHT(vdsc_cfg->slice_height) |
DSC_SLICE_WIDTH(vdsc_cfg->slice_width);
DRM_INFO("PPS3 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_3, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -587,7 +598,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_INITIAL_XMIT_DELAY(vdsc_cfg->initial_xmit_delay) |
DSC_INITIAL_DEC_DELAY(vdsc_cfg->initial_dec_delay);
DRM_INFO("PPS4 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_4, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -607,7 +618,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_SCALE_INC_INT(vdsc_cfg->scale_increment_interval) |
DSC_SCALE_DEC_INT(vdsc_cfg->scale_decrement_interval);
DRM_INFO("PPS5 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_5, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -629,7 +640,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
DSC_FLATNESS_MIN_QP(vdsc_cfg->flatness_min_qp) |
DSC_FLATNESS_MAX_QP(vdsc_cfg->flatness_max_qp);
DRM_INFO("PPS6 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_6, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -649,7 +660,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_SLICE_BPG_OFFSET(vdsc_cfg->slice_bpg_offset) |
DSC_NFL_BPG_OFFSET(vdsc_cfg->nfl_bpg_offset);
DRM_INFO("PPS7 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_7, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -669,7 +680,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_FINAL_OFFSET(vdsc_cfg->final_offset) |
DSC_INITIAL_OFFSET(vdsc_cfg->initial_offset);
DRM_INFO("PPS8 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_8, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -689,7 +700,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
pps_val |= DSC_RC_MODEL_SIZE(DSC_RC_MODEL_SIZE_CONST) |
DSC_RC_EDGE_FACTOR(DSC_RC_EDGE_FACTOR_CONST);
DRM_INFO("PPS9 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_9, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -711,7 +722,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
DSC_RC_TARGET_OFF_HIGH(DSC_RC_TGT_OFFSET_HI_CONST) |
DSC_RC_TARGET_OFF_LOW(DSC_RC_TGT_OFFSET_LO_CONST);
DRM_INFO("PPS10 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_10, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -734,7 +745,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
DSC_SLICE_ROW_PER_FRAME(vdsc_cfg->pic_height /
vdsc_cfg->slice_height);
DRM_INFO("PPS16 = 0x%08x\n", pps_val);
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_PICTURE_PARAMETER_SET_16, pps_val);
/*
* If 2 VDSC instances are needed, configure PPS for second
@@ -758,7 +769,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
DRM_INFO(" RC_BUF_THRESH%d = 0x%08x\n", i,
rc_buf_thresh_dword[i / 4]);
}
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_RC_BUF_THRESH_0, rc_buf_thresh_dword[0]);
I915_WRITE(DSCA_RC_BUF_THRESH_0_UDW, rc_buf_thresh_dword[1]);
I915_WRITE(DSCA_RC_BUF_THRESH_1, rc_buf_thresh_dword[2]);
@@ -807,7 +818,7 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
DRM_INFO(" RC_RANGE_PARAM_%d = 0x%08x\n", i,
rc_range_params_dword[i / 2]);
}
if (cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
I915_WRITE(DSCA_RC_RANGE_PARAMETERS_0,
rc_range_params_dword[0]);
I915_WRITE(DSCA_RC_RANGE_PARAMETERS_0_UDW,
@@ -880,8 +891,75 @@ static void intel_configure_pps_for_dsc_encoder(struct intel_encoder *encoder,
}
}
static void intel_dp_write_dsc_pps_sdp(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
void intel_dsc_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
enum pipe pipe = crtc->pipe;
enum intel_display_power_domain power_domain;
intel_wakeref_t wakeref;
u32 dss_ctl1, dss_ctl2, val;
if (!intel_dsc_source_support(encoder, crtc_state))
return;
power_domain = intel_dsc_power_domain(crtc_state);
wakeref = intel_display_power_get_if_enabled(dev_priv, power_domain);
if (!wakeref)
return;
if (!is_pipe_dsc(crtc_state)) {
dss_ctl1 = I915_READ(DSS_CTL1);
dss_ctl2 = I915_READ(DSS_CTL2);
} else {
dss_ctl1 = I915_READ(ICL_PIPE_DSS_CTL1(pipe));
dss_ctl2 = I915_READ(ICL_PIPE_DSS_CTL2(pipe));
}
crtc_state->dsc.compression_enable = dss_ctl2 & LEFT_BRANCH_VDSC_ENABLE;
if (!crtc_state->dsc.compression_enable)
goto out;
crtc_state->dsc.dsc_split = (dss_ctl2 & RIGHT_BRANCH_VDSC_ENABLE) &&
(dss_ctl1 & JOINER_ENABLE);
/* FIXME: add more state readout as needed */
/* PPS1 */
if (!is_pipe_dsc(crtc_state))
val = I915_READ(DSCA_PICTURE_PARAMETER_SET_1);
else
val = I915_READ(ICL_DSC0_PICTURE_PARAMETER_SET_1(pipe));
vdsc_cfg->bits_per_pixel = val;
crtc_state->dsc.compressed_bpp = vdsc_cfg->bits_per_pixel >> 4;
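/* the inverse of the << 4 done at compute time, e.g. 0xc0 reads back as 12 bpp */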
out:
intel_display_power_put(dev_priv, power_domain, wakeref);
}
static void intel_dsc_dsi_pps_write(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
const struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
struct mipi_dsi_device *dsi;
struct drm_dsc_picture_parameter_set pps;
enum port port;
drm_dsc_pps_payload_pack(&pps, vdsc_cfg);
for_each_dsi_port(port, intel_dsi->ports) {
dsi = intel_dsi->dsi_hosts[port]->device;
mipi_dsi_picture_parameter_set(dsi, &pps);
mipi_dsi_compression_mode(dsi, true);
}
}
static void intel_dsc_dp_pps_write(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&encoder->base);
struct intel_digital_port *intel_dig_port = dp_to_dig_port(intel_dp);
@@ -902,7 +980,7 @@ static void intel_dp_write_dsc_pps_sdp(struct intel_encoder *encoder,
void intel_dsc_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
enum pipe pipe = crtc->pipe;
i915_reg_t dss_ctl1_reg, dss_ctl2_reg;
@@ -916,11 +994,14 @@ void intel_dsc_enable(struct intel_encoder *encoder,
intel_display_power_get(dev_priv,
intel_dsc_power_domain(crtc_state));
intel_configure_pps_for_dsc_encoder(encoder, crtc_state);
intel_dsc_pps_configure(encoder, crtc_state);
intel_dp_write_dsc_pps_sdp(encoder, crtc_state);
if (encoder->type == INTEL_OUTPUT_DSI)
intel_dsc_dsi_pps_write(encoder, crtc_state);
else
intel_dsc_dp_pps_write(encoder, crtc_state);
if (crtc_state->cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(crtc_state)) {
dss_ctl1_reg = DSS_CTL1;
dss_ctl2_reg = DSS_CTL2;
} else {
@@ -938,7 +1019,7 @@ void intel_dsc_enable(struct intel_encoder *encoder,
void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
{
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(old_crtc_state->uapi.crtc);
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
enum pipe pipe = crtc->pipe;
i915_reg_t dss_ctl1_reg, dss_ctl2_reg;
@@ -947,7 +1028,7 @@ void intel_dsc_disable(const struct intel_crtc_state *old_crtc_state)
if (!old_crtc_state->dsc.compression_enable)
return;
if (old_crtc_state->cpu_transcoder == TRANSCODER_EDP) {
if (!is_pipe_dsc(old_crtc_state)) {
dss_ctl1_reg = DSS_CTL1;
dss_ctl2_reg = DSS_CTL2;
} else {

@@ -6,15 +6,20 @@
#ifndef __INTEL_VDSC_H__
#define __INTEL_VDSC_H__
#include <linux/types.h>
struct intel_encoder;
struct intel_crtc_state;
struct intel_dp;
bool intel_dsc_source_support(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void intel_dsc_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state);
void intel_dsc_disable(const struct intel_crtc_state *crtc_state);
int intel_dp_compute_dsc_params(struct intel_dp *intel_dp,
struct intel_crtc_state *pipe_config);
int intel_dsc_compute_params(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config);
void intel_dsc_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *crtc_state);
enum intel_display_power_domain
intel_dsc_power_domain(const struct intel_crtc_state *crtc_state);

@@ -261,9 +261,9 @@ static int intel_dsi_compute_config(struct intel_encoder *encoder,
struct intel_dsi *intel_dsi = container_of(encoder, struct intel_dsi,
base);
struct intel_connector *intel_connector = intel_dsi->attached_connector;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
const struct drm_display_mode *fixed_mode = intel_connector->panel.fixed_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
int ret;
DRM_DEBUG_KMS("\n");
@@ -624,7 +624,7 @@ static void intel_dsi_port_enable(struct intel_encoder *encoder,
const struct intel_crtc_state *crtc_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
enum port port;
@@ -746,7 +746,7 @@ static void intel_dsi_pre_enable(struct intel_encoder *encoder,
const struct drm_connector_state *conn_state)
{
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
struct drm_crtc *crtc = pipe_config->base.crtc;
struct drm_crtc *crtc = pipe_config->uapi.crtc;
struct drm_i915_private *dev_priv = to_i915(crtc->dev);
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
enum pipe pipe = intel_crtc->pipe;
@@ -882,8 +882,8 @@ static void intel_dsi_clear_device_ready(struct intel_encoder *encoder)
}
static void intel_dsi_post_disable(struct intel_encoder *encoder,
const struct intel_crtc_state *pipe_config,
const struct drm_connector_state *conn_state)
const struct intel_crtc_state *old_crtc_state,
const struct drm_connector_state *old_conn_state)
{
struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
@@ -892,6 +892,12 @@ static void intel_dsi_post_disable(struct intel_encoder *encoder,
DRM_DEBUG_KMS("\n");
if (IS_GEN9_LP(dev_priv)) {
intel_crtc_vblank_off(old_crtc_state);
skylake_scaler_disable(old_crtc_state);
}
if (is_vid_mode(intel_dsi)) {
for_each_dsi_port(port, intel_dsi->ports)
vlv_dsi_wait_for_fifo_empty(intel_dsi, port);
@@ -1032,9 +1038,9 @@ static void bxt_dsi_get_pipe_config(struct intel_encoder *encoder,
struct drm_device *dev = encoder->base.dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
&pipe_config->hw.adjusted_mode;
struct drm_display_mode *adjusted_mode_sw;
struct intel_crtc *crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
unsigned int lane_count = intel_dsi->lane_count;
unsigned int bpp, fmt;
@@ -1045,7 +1051,7 @@ static void bxt_dsi_get_pipe_config(struct intel_encoder *encoder,
crtc_hblank_start_sw, crtc_hblank_end_sw;
/* FIXME: hw readout should not depend on SW state */
adjusted_mode_sw = &crtc->config->base.adjusted_mode;
adjusted_mode_sw = &crtc->config->hw.adjusted_mode;
/*
* At least one port is active, as encoder->get_config is called only if
@@ -1204,7 +1210,7 @@ static void intel_dsi_get_config(struct intel_encoder *encoder,
}
if (pclk) {
pipe_config->base.adjusted_mode.crtc_clock = pclk;
pipe_config->hw.adjusted_mode.crtc_clock = pclk;
pipe_config->port_clock = pclk;
}
}
@@ -1315,9 +1321,9 @@ static void intel_dsi_prepare(struct intel_encoder *intel_encoder,
struct drm_encoder *encoder = &intel_encoder->base;
struct drm_device *dev = encoder->dev;
struct drm_i915_private *dev_priv = to_i915(dev);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->base.crtc);
struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
const struct drm_display_mode *adjusted_mode = &pipe_config->base.adjusted_mode;
const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
enum port port;
unsigned int bpp = mipi_dsi_pixel_format_to_bpp(intel_dsi->pixel_format);
u32 val, tmp;

@@ -20,33 +20,31 @@ static void __do_clflush(struct drm_i915_gem_object *obj)
{
GEM_BUG_ON(!i915_gem_object_has_pages(obj));
drm_clflush_sg(obj->mm.pages);
intel_frontbuffer_flush(obj->frontbuffer, ORIGIN_CPU);
i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
}
static int clflush_work(struct dma_fence_work *base)
{
struct clflush *clflush = container_of(base, typeof(*clflush), base);
struct drm_i915_gem_object *obj = fetch_and_zero(&clflush->obj);
struct drm_i915_gem_object *obj = clflush->obj;
int err;
err = i915_gem_object_pin_pages(obj);
if (err)
goto put;
return err;
__do_clflush(obj);
i915_gem_object_unpin_pages(obj);
put:
i915_gem_object_put(obj);
return err;
return 0;
}
static void clflush_release(struct dma_fence_work *base)
{
struct clflush *clflush = container_of(base, typeof(*clflush), base);
if (clflush->obj)
i915_gem_object_put(clflush->obj);
i915_gem_object_put(clflush->obj);
}
static const struct dma_fence_work_ops clflush_ops = {

@@ -69,7 +69,9 @@
#include <drm/i915_drm.h>
#include "gt/intel_context.h"
#include "gt/intel_engine_heartbeat.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_lrc_reg.h"
#include "gt/intel_ring.h"
@@ -169,12 +171,80 @@ lookup_user_engine(struct i915_gem_context *ctx,
return i915_gem_context_get_engine(ctx, idx);
}
static struct i915_address_space *
context_get_vm_rcu(struct i915_gem_context *ctx)
{
GEM_BUG_ON(!rcu_access_pointer(ctx->vm));
do {
struct i915_address_space *vm;
/*
* We do not allow downgrading from full-ppgtt [to a shared
* global gtt], so ctx->vm cannot become NULL.
*/
vm = rcu_dereference(ctx->vm);
if (!kref_get_unless_zero(&vm->ref))
continue;
/*
* This ppgtt may have been reallocated between
* the read and the kref, and reassigned to a third
* context. In order to avoid inadvertent sharing
* of this ppgtt with that third context (and not
* src), we have to confirm that we have the same
* ppgtt after passing through the strong memory
* barrier implied by a successful
* kref_get_unless_zero().
*
* Once we have acquired the current ppgtt of ctx,
* we no longer care if it is released from ctx, as
* it cannot be reallocated elsewhere.
*/
if (vm == rcu_access_pointer(ctx->vm))
return rcu_pointer_handoff(vm);
i915_vm_put(vm);
} while (1);
}
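/*
 * Typical usage, matching the callers below: hold the RCU read lock
 * around the lookup and drop the acquired reference when done.
 *
 *	rcu_read_lock();
 *	vm = context_get_vm_rcu(ctx);
 *	rcu_read_unlock();
 *	...
 *	i915_vm_put(vm);
 */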
static void intel_context_set_gem(struct intel_context *ce,
struct i915_gem_context *ctx)
{
GEM_BUG_ON(rcu_access_pointer(ce->gem_context));
RCU_INIT_POINTER(ce->gem_context, ctx);
if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags))
ce->ring = __intel_context_ring_size(SZ_16K);
if (rcu_access_pointer(ctx->vm)) {
struct i915_address_space *vm;
rcu_read_lock();
vm = context_get_vm_rcu(ctx); /* hmm */
rcu_read_unlock();
i915_vm_put(ce->vm);
ce->vm = vm;
}
GEM_BUG_ON(ce->timeline);
if (ctx->timeline)
ce->timeline = intel_timeline_get(ctx->timeline);
if (ctx->sched.priority >= I915_PRIORITY_NORMAL &&
intel_engine_has_semaphores(ce->engine))
__set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
}
static void __free_engines(struct i915_gem_engines *e, unsigned int count)
{
while (count--) {
if (!e->engines[count])
continue;
RCU_INIT_POINTER(e->engines[count]->gem_context, NULL);
intel_context_put(e->engines[count]);
}
kfree(e);
@@ -211,12 +281,14 @@ static struct i915_gem_engines *default_engines(struct i915_gem_context *ctx)
GEM_BUG_ON(engine->legacy_idx >= I915_NUM_ENGINES);
GEM_BUG_ON(e->engines[engine->legacy_idx]);
ce = intel_context_create(ctx, engine);
ce = intel_context_create(engine);
if (IS_ERR(ce)) {
__free_engines(e, e->num_engines + 1);
return ERR_CAST(ce);
}
intel_context_set_gem(ce, ctx);
e->engines[engine->legacy_idx] = ce;
e->num_engines = max(e->num_engines, engine->legacy_idx);
}
@@ -236,14 +308,10 @@ static void i915_gem_context_free(struct i915_gem_context *ctx)
free_engines(rcu_access_pointer(ctx->engines));
mutex_destroy(&ctx->engines_mutex);
kfree(ctx->jump_whitelist);
if (ctx->timeline)
intel_timeline_put(ctx->timeline);
kfree(ctx->name);
put_pid(ctx->pid);
mutex_destroy(&ctx->mutex);
kfree_rcu(ctx, rcu);
@@ -388,15 +456,6 @@ static void kill_context(struct i915_gem_context *ctx)
struct i915_gem_engines_iter it;
struct intel_context *ce;
/*
* If we are already banned, it was due to a guilty request causing
* a reset and the entire context being evicted from the GPU.
*/
if (i915_gem_context_is_banned(ctx))
return;
i915_gem_context_set_banned(ctx);
/*
* Map the user's engine back to the actual engines; one virtual
* engine will be mapped to multiple engines, and using ctx->engine[]
@@ -407,6 +466,9 @@ static void kill_context(struct i915_gem_context *ctx)
for_each_gem_engine(ce, __context_engines_static(ctx), it) {
struct intel_engine_cs *engine;
if (intel_context_set_banned(ce))
continue;
/*
* Check the current active state of this context; if we
* are currently executing on the GPU we need to evict
@@ -427,11 +489,29 @@ static void kill_context(struct i915_gem_context *ctx)
}
}
static void set_closed_name(struct i915_gem_context *ctx)
{
char *s;
/* Replace '[]' with '<>' to indicate closed in debug prints */
s = strrchr(ctx->name, '[');
if (!s)
return;
*s = '<';
s = strchr(s + 1, ']');
if (s)
*s = '>';
}
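/* e.g. a context named "Xorg[1234]" shows up as "Xorg<1234>" once closed */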
static void context_close(struct i915_gem_context *ctx)
{
struct i915_address_space *vm;
i915_gem_context_set_closed(ctx);
set_closed_name(ctx);
mutex_lock(&ctx->mutex);
@@ -529,9 +609,6 @@ __create_context(struct drm_i915_private *i915)
for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
ctx->jump_whitelist = NULL;
ctx->jump_whitelist_cmds = 0;
spin_lock(&i915->gem.contexts.lock);
list_add_tail(&ctx->link, &i915->gem.contexts.list);
spin_unlock(&i915->gem.contexts.lock);
@@ -661,37 +738,6 @@ i915_gem_create_context(struct drm_i915_private *i915, unsigned int flags)
return ctx;
}
static void
destroy_kernel_context(struct i915_gem_context **ctxp)
{
struct i915_gem_context *ctx;
/* Keep the context ref so that we can free it immediately ourselves */
ctx = i915_gem_context_get(fetch_and_zero(ctxp));
GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
context_close(ctx);
i915_gem_context_free(ctx);
}
struct i915_gem_context *
i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
{
struct i915_gem_context *ctx;
ctx = i915_gem_create_context(i915, 0);
if (IS_ERR(ctx))
return ctx;
i915_gem_context_clear_bannable(ctx);
i915_gem_context_set_persistence(ctx);
ctx->sched.priority = I915_USER_PRIORITY(prio);
GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
return ctx;
}
static void init_contexts(struct i915_gem_contexts *gc)
{
spin_lock_init(&gc->lock);
@@ -701,32 +747,16 @@ static void init_contexts(struct i915_gem_contexts *gc)
init_llist_head(&gc->free_list);
}
int i915_gem_init_contexts(struct drm_i915_private *i915)
void i915_gem_init__contexts(struct drm_i915_private *i915)
{
struct i915_gem_context *ctx;
/* Reassure ourselves we are only called once */
GEM_BUG_ON(i915->kernel_context);
init_contexts(&i915->gem.contexts);
/* lowest priority; idle task */
ctx = i915_gem_context_create_kernel(i915, I915_PRIORITY_MIN);
if (IS_ERR(ctx)) {
DRM_ERROR("Failed to create default global context\n");
return PTR_ERR(ctx);
}
i915->kernel_context = ctx;
DRM_DEBUG_DRIVER("%s context support initialized\n",
DRIVER_CAPS(i915)->has_logical_contexts ?
"logical" : "fake");
return 0;
}
void i915_gem_driver_release__contexts(struct drm_i915_private *i915)
{
destroy_kernel_context(&i915->kernel_context);
flush_work(&i915->gem.contexts.free_work);
}
@@ -757,12 +787,8 @@ static int gem_context_register(struct i915_gem_context *ctx,
mutex_unlock(&ctx->mutex);
ctx->pid = get_task_pid(current, PIDTYPE_PID);
ctx->name = kasprintf(GFP_KERNEL, "%s[%d]",
current->comm, pid_nr(ctx->pid));
if (!ctx->name) {
ret = -ENOMEM;
goto err_pid;
}
snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
current->comm, pid_nr(ctx->pid));
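/*
 * ctx->name is now a fixed char name[TASK_COMM_LEN + 8] array (see the
 * context types header below), so unlike the old kasprintf() this
 * cannot fail; snprintf() simply truncates oversized names.
 */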
/* And finally expose ourselves to userspace via the idr */
mutex_lock(&fpriv->context_idr_lock);
@@ -771,8 +797,6 @@ static int gem_context_register(struct i915_gem_context *ctx,
if (ret >= 0)
goto out;
kfree(fetch_and_zero(&ctx->name));
err_pid:
put_pid(fetch_and_zero(&ctx->pid));
out:
return ret;
@@ -801,7 +825,6 @@ int i915_gem_context_open(struct drm_i915_private *i915,
if (err < 0)
goto err_ctx;
GEM_BUG_ON(i915_gem_context_is_kernel(ctx));
GEM_BUG_ON(err > 0);
return 0;
@@ -1012,7 +1035,7 @@ static int get_ppgtt(struct drm_i915_file_private *file_priv,
return -ENODEV;
rcu_read_lock();
vm = i915_vm_get(ctx->vm);
vm = context_get_vm_rcu(ctx);
rcu_read_unlock();
ret = mutex_lock_interruptible(&file_priv->vm_idr_lock);
@@ -1049,7 +1072,7 @@ static void set_ppgtt_barrier(void *data)
static int emit_ppgtt_update(struct i915_request *rq, void *data)
{
struct i915_address_space *vm = rq->hw_context->vm;
struct i915_address_space *vm = rq->context->vm;
struct intel_engine_cs *engine = rq->engine;
u32 base = engine->mmio_base;
u32 *cs;
@@ -1096,9 +1119,6 @@ static int emit_ppgtt_update(struct i915_request *rq, void *data)
}
*cs++ = MI_NOOP;
intel_ring_advance(rq, cs);
} else {
/* ppGTT is not part of the legacy context image */
gen6_ppgtt_pin(i915_vm_to_ppgtt(vm));
}
return 0;
@@ -1106,10 +1126,20 @@ static int emit_ppgtt_update(struct i915_request *rq, void *data)
static bool skip_ppgtt_update(struct intel_context *ce, void *data)
{
if (!test_bit(CONTEXT_ALLOC_BIT, &ce->flags))
return true;
if (HAS_LOGICAL_RING_CONTEXTS(ce->engine->i915))
return !ce->state;
else
return !atomic_read(&ce->pin_count);
return false;
if (!atomic_read(&ce->pin_count))
return true;
/* ppGTT is not part of the legacy context image */
if (gen6_ppgtt_pin(i915_vm_to_ppgtt(ce->vm)))
return true;
return false;
}
static int set_ppgtt(struct drm_i915_file_private *file_priv,
@@ -1217,7 +1247,7 @@ gen8_modify_rpcs(struct intel_context *ce, struct intel_sseu sseu)
if (!intel_context_is_pinned(ce))
return 0;
rq = i915_request_create(ce->engine->kernel_context);
rq = intel_engine_create_kernel_request(ce->engine);
if (IS_ERR(rq))
return PTR_ERR(rq);
@@ -1485,12 +1515,14 @@ set_engines__load_balance(struct i915_user_extension __user *base, void *data)
}
}
ce = intel_execlists_create_virtual(set->ctx, siblings, n);
ce = intel_execlists_create_virtual(siblings, n);
if (IS_ERR(ce)) {
err = PTR_ERR(ce);
goto out_siblings;
}
intel_context_set_gem(ce, set->ctx);
if (cmpxchg(&set->engines->engines[idx], NULL, ce)) {
intel_context_put(ce);
err = -EEXIST;
@@ -1660,12 +1692,14 @@ set_engines(struct i915_gem_context *ctx,
return -ENOENT;
}
ce = intel_context_create(ctx, engine);
ce = intel_context_create(engine);
if (IS_ERR(ce)) {
__free_engines(set.engines, n);
return PTR_ERR(ce);
}
intel_context_set_gem(ce, ctx);
set.engines->engines[n] = ce;
}
set.engines->num_engines = num_engines;
@@ -1806,6 +1840,44 @@ set_persistence(struct i915_gem_context *ctx,
return __context_set_persistence(ctx, args->value);
}
static void __apply_priority(struct intel_context *ce, void *arg)
{
struct i915_gem_context *ctx = arg;
if (!intel_engine_has_semaphores(ce->engine))
return;
if (ctx->sched.priority >= I915_PRIORITY_NORMAL)
intel_context_set_use_semaphores(ce);
else
intel_context_clear_use_semaphores(ce);
}
static int set_priority(struct i915_gem_context *ctx,
const struct drm_i915_gem_context_param *args)
{
s64 priority = args->value;
if (args->size)
return -EINVAL;
if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PRIORITY))
return -ENODEV;
if (priority > I915_CONTEXT_MAX_USER_PRIORITY ||
priority < I915_CONTEXT_MIN_USER_PRIORITY)
return -EINVAL;
if (priority > I915_CONTEXT_DEFAULT_PRIORITY &&
!capable(CAP_SYS_NICE))
return -EPERM;
ctx->sched.priority = I915_USER_PRIORITY(priority);
context_apply_all(ctx, __apply_priority, ctx);
return 0;
}
static int ctx_setparam(struct drm_i915_file_private *fpriv,
struct i915_gem_context *ctx,
struct drm_i915_gem_context_param *args)
@@ -1852,23 +1924,7 @@ static int ctx_setparam(struct drm_i915_file_private *fpriv,
break;
case I915_CONTEXT_PARAM_PRIORITY:
{
s64 priority = args->value;
if (args->size)
ret = -EINVAL;
else if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PRIORITY))
ret = -ENODEV;
else if (priority > I915_CONTEXT_MAX_USER_PRIORITY ||
priority < I915_CONTEXT_MIN_USER_PRIORITY)
ret = -EINVAL;
else if (priority > I915_CONTEXT_DEFAULT_PRIORITY &&
!capable(CAP_SYS_NICE))
ret = -EPERM;
else
ctx->sched.priority =
I915_USER_PRIORITY(priority);
}
ret = set_priority(ctx, args);
break;
case I915_CONTEXT_PARAM_SSEU:
@@ -1948,20 +2004,23 @@ static int clone_engines(struct i915_gem_context *dst,
*/
if (intel_engine_is_virtual(engine))
clone->engines[n] =
intel_execlists_clone_virtual(dst, engine);
intel_execlists_clone_virtual(engine);
else
clone->engines[n] = intel_context_create(dst, engine);
clone->engines[n] = intel_context_create(engine);
if (IS_ERR_OR_NULL(clone->engines[n])) {
__free_engines(clone, n);
goto err_unlock;
}
intel_context_set_gem(clone->engines[n], dst);
}
clone->num_engines = n;
user_engines = i915_gem_context_user_engines(src);
i915_gem_context_unlock_engines(src);
free_engines(dst->engines);
/* Serialised by constructor */
free_engines(__context_engines_static(dst));
RCU_INIT_POINTER(dst->engines, clone);
if (user_engines)
i915_gem_context_set_user_engines(dst);
@@ -1996,7 +2055,8 @@ static int clone_sseu(struct i915_gem_context *dst,
unsigned long n;
int err;
clone = dst->engines; /* no locking required; sole access */
/* no locking required; sole access under constructor */
clone = __context_engines_static(dst);
if (e->num_engines != clone->num_engines) {
err = -EINVAL;
goto unlock;
@@ -2041,47 +2101,21 @@ static int clone_vm(struct i915_gem_context *dst,
struct i915_address_space *vm;
int err = 0;
if (!rcu_access_pointer(src->vm))
return 0;
rcu_read_lock();
do {
vm = rcu_dereference(src->vm);
if (!vm)
break;
if (!kref_get_unless_zero(&vm->ref))
continue;
/*
* This ppgtt may have been reallocated between
* the read and the kref, and reassigned to a third
* context. In order to avoid inadvertent sharing
* of this ppgtt with that third context (and not
* src), we have to confirm that we have the same
* ppgtt after passing through the strong memory
* barrier implied by a successful
* kref_get_unless_zero().
*
* Once we have acquired the current ppgtt of src,
* we no longer care if it is released from src, as
* it cannot be reallocated elsewhere.
*/
if (vm == rcu_access_pointer(src->vm))
break;
i915_vm_put(vm);
} while (1);
vm = context_get_vm_rcu(src);
rcu_read_unlock();
if (vm) {
if (!mutex_lock_interruptible(&dst->mutex)) {
__assign_ppgtt(dst, vm);
mutex_unlock(&dst->mutex);
} else {
err = -EINTR;
}
i915_vm_put(vm);
if (!mutex_lock_interruptible(&dst->mutex)) {
__assign_ppgtt(dst, vm);
mutex_unlock(&dst->mutex);
} else {
err = -EINTR;
}
i915_vm_put(vm);
return err;
}
@@ -2167,8 +2201,7 @@ int i915_gem_context_create_ioctl(struct drm_device *dev, void *data,
ext_data.fpriv = file->driver_priv;
if (client_is_banned(ext_data.fpriv)) {
DRM_DEBUG("client %s[%d] banned from creating ctx\n",
current->comm,
pid_nr(get_task_pid(current, PIDTYPE_PID)));
current->comm, task_pid_nr(current));
return -EIO;
}

@@ -91,26 +91,6 @@ static inline void i915_gem_context_clear_persistence(struct i915_gem_context *c
clear_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
}
static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline void i915_gem_context_set_banned(struct i915_gem_context *ctx)
{
set_bit(CONTEXT_BANNED, &ctx->flags);
}
static inline bool i915_gem_context_force_single_submission(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline void i915_gem_context_set_force_single_submission(struct i915_gem_context *ctx)
{
__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ctx->flags);
}
static inline bool
i915_gem_context_user_engines(const struct i915_gem_context *ctx)
{
@@ -129,31 +109,8 @@ i915_gem_context_clear_user_engines(struct i915_gem_context *ctx)
clear_bit(CONTEXT_USER_ENGINES, &ctx->flags);
}
static inline bool
i915_gem_context_nopreempt(const struct i915_gem_context *ctx)
{
return test_bit(CONTEXT_NOPREEMPT, &ctx->flags);
}
static inline void
i915_gem_context_set_nopreempt(struct i915_gem_context *ctx)
{
set_bit(CONTEXT_NOPREEMPT, &ctx->flags);
}
static inline void
i915_gem_context_clear_nopreempt(struct i915_gem_context *ctx)
{
clear_bit(CONTEXT_NOPREEMPT, &ctx->flags);
}
static inline bool i915_gem_context_is_kernel(struct i915_gem_context *ctx)
{
return !ctx->file_priv;
}
/* i915_gem_context.c */
int __must_check i915_gem_init_contexts(struct drm_i915_private *i915);
void i915_gem_init__contexts(struct drm_i915_private *i915);
void i915_gem_driver_release__contexts(struct drm_i915_private *i915);
int i915_gem_context_open(struct drm_i915_private *i915,
@@ -178,9 +135,6 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
int i915_gem_context_reset_stats_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
struct i915_gem_context *
i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio);
static inline struct i915_gem_context *
i915_gem_context_get(struct i915_gem_context *ctx)
{

@@ -100,15 +100,6 @@ struct i915_gem_context {
*/
struct pid *pid;
/**
* @name: arbitrary name
*
* A name is constructed for the context from the creator's process
* name, pid and user handle in order to uniquely identify the
* context in messages.
*/
const char *name;
/** link: place with &drm_i915_private.context_list */
struct list_head link;
struct llist_node free_link;
@@ -143,11 +134,8 @@ struct i915_gem_context {
* @flags: small set of booleans
*/
unsigned long flags;
#define CONTEXT_BANNED 0
#define CONTEXT_CLOSED 1
#define CONTEXT_FORCE_SINGLE_SUBMISSION 2
#define CONTEXT_USER_ENGINES 3
#define CONTEXT_NOPREEMPT 4
#define CONTEXT_CLOSED 0
#define CONTEXT_USER_ENGINES 1
struct mutex mutex;
@@ -177,12 +165,14 @@ struct i915_gem_context {
*/
struct radix_tree_root handles_vma;
/** jump_whitelist: Bit array for tracking cmds during cmdparsing
* Guarded by struct_mutex
/**
* @name: arbitrary name, used for user debug
*
* A name is constructed for the context from the creator's process
* name, pid and user handle in order to uniquely identify the
* context in messages.
*/
unsigned long *jump_whitelist;
/** jump_whitelist_cmds: No of cmd slots available */
u32 jump_whitelist_cmds;
char name[TASK_COMM_LEN + 8];
};
#endif /* __I915_GEM_CONTEXT_TYPES_H__ */

@@ -12,6 +12,8 @@
#include "i915_gem_ioctls.h"
#include "i915_gem_object.h"
#include "i915_vma.h"
#include "i915_gem_lmem.h"
#include "i915_gem_mman.h"
static void __i915_gem_object_flush_for_display(struct drm_i915_gem_object *obj)
{
@@ -148,9 +150,17 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
GEM_BUG_ON((obj->write_domain & ~I915_GEM_DOMAIN_GTT) != 0);
obj->read_domains |= I915_GEM_DOMAIN_GTT;
if (write) {
struct i915_vma *vma;
obj->read_domains = I915_GEM_DOMAIN_GTT;
obj->write_domain = I915_GEM_DOMAIN_GTT;
obj->mm.dirty = true;
spin_lock(&obj->vma.lock);
for_each_ggtt_vma(vma, obj)
if (i915_vma_is_bound(vma, I915_VMA_GLOBAL_BIND))
i915_vma_set_ggtt_write(vma);
spin_unlock(&obj->vma.lock);
}
i915_gem_object_unpin_pages(obj);
@@ -175,138 +185,34 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
enum i915_cache_level cache_level)
{
struct i915_vma *vma;
int ret;
assert_object_held(obj);
if (obj->cache_level == cache_level)
return 0;
/* Inspect the list of currently bound VMA and unbind any that would
* be invalid given the new cache-level. This is principally to
* catch the issue of the CS prefetch crossing page boundaries and
* reading an invalid PTE on older architectures.
*/
restart:
list_for_each_entry(vma, &obj->vma.list, obj_link) {
if (!drm_mm_node_allocated(&vma->node))
continue;
ret = i915_gem_object_wait(obj,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_ALL,
MAX_SCHEDULE_TIMEOUT);
if (ret)
return ret;
if (i915_vma_is_pinned(vma)) {
DRM_DEBUG("can not change the cache level of pinned objects\n");
return -EBUSY;
}
ret = i915_gem_object_lock_interruptible(obj);
if (ret)
return ret;
if (!i915_vma_is_closed(vma) &&
i915_gem_valid_gtt_space(vma, cache_level))
continue;
ret = i915_vma_unbind(vma);
if (ret)
return ret;
/* As unbinding may affect other elements in the
* obj->vma_list (due to side-effects from retiring
* an active vma), play safe and restart the iterator.
*/
goto restart;
/* Always invalidate stale cachelines */
if (obj->cache_level != cache_level) {
i915_gem_object_set_cache_coherency(obj, cache_level);
obj->cache_dirty = true;
}
/* We can reuse the existing drm_mm nodes but need to change the
* cache-level on the PTE. We could simply unbind them all and
* rebind with the correct cache-level on next use. However since
* we already have a valid slot, dma mapping, pages etc, we may as
* rewrite the PTE in the belief that doing so tramples upon less
* state and so involves less work.
*/
if (atomic_read(&obj->bind_count)) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
i915_gem_object_unlock(obj);
/* Before we change the PTE, the GPU must not be accessing it.
* If we wait upon the object, we know that all the bound
* VMA are no longer active.
*/
ret = i915_gem_object_wait(obj,
I915_WAIT_INTERRUPTIBLE |
I915_WAIT_ALL,
MAX_SCHEDULE_TIMEOUT);
if (ret)
return ret;
if (!HAS_LLC(i915) && cache_level != I915_CACHE_NONE) {
intel_wakeref_t wakeref =
intel_runtime_pm_get(&i915->runtime_pm);
/*
* Access to snoopable pages through the GTT is
* incoherent and on some machines causes a hard
* lockup. Relinquish the CPU mmapping to force
* userspace to refault in the pages and we can
* then double check if the GTT mapping is still
* valid for that pointer access.
*/
ret = mutex_lock_interruptible(&i915->ggtt.vm.mutex);
if (ret) {
intel_runtime_pm_put(&i915->runtime_pm,
wakeref);
return ret;
}
if (obj->userfault_count)
__i915_gem_object_release_mmap(obj);
/*
* As we no longer need a fence for GTT access,
* we can relinquish it now (and so prevent having
* to steal a fence from someone else on the next
* fence request). Note GPU activity would have
* dropped the fence as all snoopable access is
* supposed to be linear.
*/
for_each_ggtt_vma(vma, obj) {
ret = i915_vma_revoke_fence(vma);
if (ret)
break;
}
mutex_unlock(&i915->ggtt.vm.mutex);
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
if (ret)
return ret;
} else {
/*
* We either have incoherent backing store and
* so no GTT access or the architecture is fully
* coherent. In such cases, existing GTT mmaps
* ignore the cache bit in the PTE and we can
* rewrite it without confusing the GPU or having
* to force userspace to fault back in its mmaps.
*/
}
list_for_each_entry(vma, &obj->vma.list, obj_link) {
if (!drm_mm_node_allocated(&vma->node))
continue;
/* Wait for an earlier async bind, need to rewrite it */
ret = i915_vma_sync(vma);
if (ret)
return ret;
ret = i915_vma_bind(vma, cache_level, PIN_UPDATE, NULL);
if (ret)
return ret;
}
}
list_for_each_entry(vma, &obj->vma.list, obj_link) {
if (i915_vm_has_cache_coloring(vma->vm))
vma->node.color = cache_level;
}
i915_gem_object_set_cache_coherency(obj, cache_level);
obj->cache_dirty = true; /* Always invalidate stale cachelines */
return 0;
/* The cache-level will be applied when each vma is rebound. */
return i915_gem_object_unbind(obj,
I915_GEM_OBJECT_UNBIND_ACTIVE |
I915_GEM_OBJECT_UNBIND_BARRIER);
}
int i915_gem_get_caching_ioctl(struct drm_device *dev, void *data,
@@ -387,20 +293,7 @@ int i915_gem_set_caching_ioctl(struct drm_device *dev, void *data,
goto out;
}
if (obj->cache_level == level)
goto out;
ret = i915_gem_object_wait(obj,
I915_WAIT_INTERRUPTIBLE,
MAX_SCHEDULE_TIMEOUT);
if (ret)
goto out;
ret = i915_gem_object_lock_interruptible(obj);
if (ret == 0) {
ret = i915_gem_object_set_cache_level(obj, level);
i915_gem_object_unlock(obj);
}
ret = i915_gem_object_set_cache_level(obj, level);
out:
i915_gem_object_put(obj);
@@ -419,10 +312,13 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view,
unsigned int flags)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_vma *vma;
int ret;
assert_object_held(obj);
/* Frame buffer must be in LMEM (no migration yet) */
if (HAS_LMEM(i915) && !i915_gem_object_is_lmem(obj))
return ERR_PTR(-EINVAL);
/*
* The display engine is not coherent with the LLC cache on gen6. As
@@ -435,7 +331,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
* with that bit in the PTE to main memory with just one PIPE_CONTROL.
*/
ret = i915_gem_object_set_cache_level(obj,
HAS_WT(to_i915(obj->base.dev)) ?
HAS_WT(i915) ?
I915_CACHE_WT : I915_CACHE_NONE);
if (ret)
return ERR_PTR(ret);
@@ -462,13 +358,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
__i915_gem_object_flush_for_display(obj);
/*
* It should now be out of any other write domains, and we can update
* the domain values for our changes.
*/
obj->read_domains |= I915_GEM_DOMAIN_GTT;
i915_gem_object_flush_if_display(obj);
return vma;
}
@@ -479,8 +369,11 @@ static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
struct i915_vma *vma;
GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
if (!atomic_read(&obj->bind_count))
return;
mutex_lock(&i915->ggtt.vm.mutex);
spin_lock(&obj->vma.lock);
for_each_ggtt_vma(vma, obj) {
if (!drm_mm_node_allocated(&vma->node))
continue;
@@ -488,6 +381,7 @@ static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
GEM_BUG_ON(vma->vm != &i915->ggtt.vm);
list_move_tail(&vma->vm_link, &vma->vm->bound_list);
}
spin_unlock(&obj->vma.lock);
mutex_unlock(&i915->ggtt.vm.mutex);
if (i915_gem_object_is_shrinkable(obj)) {
@@ -664,7 +558,7 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
i915_gem_object_unlock(obj);
if (write_domain)
intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
out_unpin:
i915_gem_object_unpin_pages(obj);
@@ -784,7 +678,7 @@ int i915_gem_object_prepare_write(struct drm_i915_gem_object *obj,
}
out:
intel_frontbuffer_invalidate(obj->frontbuffer, ORIGIN_CPU);
i915_gem_object_invalidate_frontbuffer(obj, ORIGIN_CPU);
obj->mm.dirty = true;
/* return with the pages pinned */
return 0;

@@ -25,6 +25,7 @@
#include "i915_gem_clflush.h"
#include "i915_gem_context.h"
#include "i915_gem_ioctls.h"
#include "i915_sw_fence_work.h"
#include "i915_trace.h"
enum {
@@ -228,6 +229,7 @@ struct i915_execbuffer {
struct i915_request *request; /** our request to build */
struct i915_vma *batch; /** identity of the batch obj/vma */
struct i915_vma *trampoline; /** trampoline used for chaining */
/** actual size of execobj[] as we may extend it for the cmdparser */
unsigned int buffer_count;
@@ -253,7 +255,6 @@ struct i915_execbuffer {
bool has_fence : 1;
bool needs_unfenced : 1;
struct intel_context *ce;
struct i915_request *rq;
u32 *rq_cmd;
unsigned int rq_size;
@@ -277,25 +278,6 @@ struct i915_execbuffer {
#define exec_entry(EB, VMA) (&(EB)->exec[(VMA)->exec_flags - (EB)->flags])
/*
* Used to convert any address to canonical form.
* Starting from gen8, some commands (e.g. STATE_BASE_ADDRESS,
* MI_LOAD_REGISTER_MEM and others, see Broadwell PRM Vol2a) require the
* addresses to be in a canonical form:
* "GraphicsAddress[63:48] are ignored by the HW and assumed to be in correct
* canonical form [63:48] == [47]."
*/
#define GEN8_HIGH_ADDRESS_BIT 47
static inline u64 gen8_canonical_addr(u64 address)
{
return sign_extend64(address, GEN8_HIGH_ADDRESS_BIT);
}
static inline u64 gen8_noncanonical_addr(u64 address)
{
return address & GENMASK_ULL(GEN8_HIGH_ADDRESS_BIT, 0);
}
static inline bool eb_use_cmdparser(const struct i915_execbuffer *eb)
{
return intel_engine_requires_cmd_parser(eb->engine) ||
@@ -748,9 +730,6 @@ static int eb_lookup_vmas(struct i915_execbuffer *eb)
unsigned int i, batch;
int err;
if (unlikely(i915_gem_context_is_banned(eb->gem_context)))
return -EIO;
INIT_LIST_HEAD(&eb->relocs);
INIT_LIST_HEAD(&eb->unbound);
@@ -886,9 +865,6 @@ static void eb_destroy(const struct i915_execbuffer *eb)
{
GEM_BUG_ON(eb->reloc_cache.rq);
if (eb->reloc_cache.ce)
intel_context_put(eb->reloc_cache.ce);
if (eb->lut_size > 0)
kfree(eb->buckets);
}
@@ -912,7 +888,6 @@ static void reloc_cache_init(struct reloc_cache *cache,
cache->has_fence = cache->gen < 4;
cache->needs_unfenced = INTEL_INFO(i915)->unfenced_needs_alignment;
cache->node.flags = 0;
cache->ce = NULL;
cache->rq = NULL;
cache->rq_size = 0;
}
@@ -1182,7 +1157,7 @@ static int __reloc_gpu_alloc(struct i915_execbuffer *eb,
if (err)
goto err_unmap;
rq = intel_context_create_request(cache->ce);
rq = i915_request_create(eb->context);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto err_unpin;
@@ -1246,36 +1221,9 @@ static u32 *reloc_gpu(struct i915_execbuffer *eb,
if (unlikely(!cache->rq)) {
int err;
/* If we need to copy for the cmdparser, we will stall anyway */
if (eb_use_cmdparser(eb))
return ERR_PTR(-EWOULDBLOCK);
if (!intel_engine_can_store_dword(eb->engine))
return ERR_PTR(-ENODEV);
if (!cache->ce) {
struct intel_context *ce;
/*
* The CS pre-parser can pre-fetch commands across
* memory sync points and starting gen12 it is able to
* pre-fetch across BB_START and BB_END boundaries
* (within the same context). We therefore use a
* separate context gen12+ to guarantee that the reloc
* writes land before the parser gets to the target
* memory location.
*/
if (cache->gen >= 12)
ce = intel_context_create(eb->context->gem_context,
eb->engine);
else
ce = intel_context_get(eb->context);
if (IS_ERR(ce))
return ERR_CAST(ce);
cache->ce = ce;
}
err = __reloc_gpu_alloc(eb, vma, len);
if (unlikely(err))
return ERR_PTR(err);
@@ -1943,15 +1891,15 @@ err_skip:
return err;
}
static bool i915_gem_check_execbuffer(struct drm_i915_gem_execbuffer2 *exec)
static int i915_gem_check_execbuffer(struct drm_i915_gem_execbuffer2 *exec)
{
if (exec->flags & __I915_EXEC_ILLEGAL_FLAGS)
return false;
return -EINVAL;
/* Kernel clipping was a DRI1 misfeature */
if (!(exec->flags & I915_EXEC_FENCE_ARRAY)) {
if (exec->num_cliprects || exec->cliprects_ptr)
return false;
return -EINVAL;
}
if (exec->DR4 == 0xffffffff) {
@@ -1959,12 +1907,12 @@ static bool i915_gem_check_execbuffer(struct drm_i915_gem_execbuffer2 *exec)
exec->DR4 = 0;
}
if (exec->DR1 || exec->DR4)
return false;
return -EINVAL;
if ((exec->batch_start_offset | exec->batch_len) & 0x7)
return false;
return -EINVAL;
return true;
return 0;
}
static int i915_reset_gen7_sol_offsets(struct i915_request *rq)
@@ -1993,99 +1941,179 @@ static int i915_reset_gen7_sol_offsets(struct i915_request *rq)
}
static struct i915_vma *
shadow_batch_pin(struct i915_execbuffer *eb, struct drm_i915_gem_object *obj)
shadow_batch_pin(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
unsigned int flags)
{
struct drm_i915_private *dev_priv = eb->i915;
struct i915_vma * const vma = *eb->vma;
struct i915_address_space *vm;
u64 flags;
/*
* PPGTT backed shadow buffers must be mapped RO, to prevent
* post-scan tampering
*/
if (CMDPARSER_USES_GGTT(dev_priv)) {
flags = PIN_GLOBAL;
vm = &dev_priv->ggtt.vm;
} else if (vma->vm->has_read_only) {
flags = PIN_USER;
vm = vma->vm;
i915_gem_object_set_readonly(obj);
} else {
DRM_DEBUG("Cannot prevent post-scan tampering without RO capable vm\n");
return ERR_PTR(-EINVAL);
}
return i915_gem_object_pin(obj, vm, NULL, 0, 0, flags);
}
static struct i915_vma *eb_parse(struct i915_execbuffer *eb)
{
struct intel_engine_pool_node *pool;
struct i915_vma *vma;
u64 batch_start;
u64 shadow_batch_start;
int err;
pool = intel_engine_get_pool(eb->engine, eb->batch_len);
if (IS_ERR(pool))
return ERR_CAST(pool);
vma = shadow_batch_pin(eb, pool->obj);
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma))
goto err;
return vma;
batch_start = gen8_canonical_addr(eb->batch->node.start) +
eb->batch_start_offset;
err = i915_vma_pin(vma, 0, 0, flags);
if (err)
return ERR_PTR(err);
shadow_batch_start = gen8_canonical_addr(vma->node.start);
return vma;
}
err = intel_engine_cmd_parser(eb->gem_context,
eb->engine,
eb->batch->obj,
batch_start,
eb->batch_start_offset,
eb->batch_len,
pool->obj,
shadow_batch_start);
struct eb_parse_work {
struct dma_fence_work base;
struct intel_engine_cs *engine;
struct i915_vma *batch;
struct i915_vma *shadow;
struct i915_vma *trampoline;
unsigned int batch_offset;
unsigned int batch_length;
};
if (err) {
i915_vma_unpin(vma);
static int __eb_parse(struct dma_fence_work *work)
{
struct eb_parse_work *pw = container_of(work, typeof(*pw), base);
return intel_engine_cmd_parser(pw->engine,
pw->batch,
pw->batch_offset,
pw->batch_length,
pw->shadow,
pw->trampoline);
}
static const struct dma_fence_work_ops eb_parse_ops = {
.name = "eb_parse",
.work = __eb_parse,
};
static int eb_parse_pipeline(struct i915_execbuffer *eb,
struct i915_vma *shadow,
struct i915_vma *trampoline)
{
struct eb_parse_work *pw;
int err;
pw = kzalloc(sizeof(*pw), GFP_KERNEL);
if (!pw)
return -ENOMEM;
dma_fence_work_init(&pw->base, &eb_parse_ops);
pw->engine = eb->engine;
pw->batch = eb->batch;
pw->batch_offset = eb->batch_start_offset;
pw->batch_length = eb->batch_len;
pw->shadow = shadow;
pw->trampoline = trampoline;
dma_resv_lock(pw->batch->resv, NULL);
err = dma_resv_reserve_shared(pw->batch->resv, 1);
if (err)
goto err_batch_unlock;
/* Wait for all writes (and relocs) into the batch to complete */
err = i915_sw_fence_await_reservation(&pw->base.chain,
pw->batch->resv, NULL, false,
0, I915_FENCE_GFP);
if (err < 0)
goto err_batch_unlock;
/* Keep the batch alive and unwritten as we parse */
dma_resv_add_shared_fence(pw->batch->resv, &pw->base.dma);
dma_resv_unlock(pw->batch->resv);
/* Force execution to wait for completion of the parser */
dma_resv_lock(shadow->resv, NULL);
dma_resv_add_excl_fence(shadow->resv, &pw->base.dma);
dma_resv_unlock(shadow->resv);
dma_fence_work_commit(&pw->base);
return 0;
err_batch_unlock:
dma_resv_unlock(pw->batch->resv);
kfree(pw);
return err;
}
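/*
 * The fence choreography above in brief: the parse work holds a shared
 * fence slot on the batch, keeping it alive and unwritten while it is
 * scanned, and installs itself as the exclusive fence on the shadow so
 * that execution of the shadow cannot begin until parsing completes.
 */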
static int eb_parse(struct i915_execbuffer *eb)
{
struct intel_engine_pool_node *pool;
struct i915_vma *shadow, *trampoline;
unsigned int len;
int err;
if (!eb_use_cmdparser(eb))
return 0;
len = eb->batch_len;
if (!CMDPARSER_USES_GGTT(eb->i915)) {
/*
* Unsafe GGTT-backed buffers can still be submitted safely
* as non-secure.
* For PPGTT backing however, we have no choice but to forcibly
* reject unsafe buffers
* ppGTT backed shadow buffers must be mapped RO, to prevent
* post-scan tampering
*/
if (CMDPARSER_USES_GGTT(eb->i915) && (err == -EACCES))
/* Execute original buffer non-secure */
vma = NULL;
else
vma = ERR_PTR(err);
goto err;
if (!eb->context->vm->has_read_only) {
DRM_DEBUG("Cannot prevent post-scan tampering without RO capable vm\n");
return -EINVAL;
}
} else {
len += I915_CMD_PARSER_TRAMPOLINE_SIZE;
}
eb->vma[eb->buffer_count] = i915_vma_get(vma);
pool = intel_engine_get_pool(eb->engine, len);
if (IS_ERR(pool))
return PTR_ERR(pool);
shadow = shadow_batch_pin(pool->obj, eb->context->vm, PIN_USER);
if (IS_ERR(shadow)) {
err = PTR_ERR(shadow);
goto err;
}
i915_gem_object_set_readonly(shadow->obj);
trampoline = NULL;
if (CMDPARSER_USES_GGTT(eb->i915)) {
trampoline = shadow;
shadow = shadow_batch_pin(pool->obj,
&eb->engine->gt->ggtt->vm,
PIN_GLOBAL);
if (IS_ERR(shadow)) {
err = PTR_ERR(shadow);
shadow = trampoline;
goto err_shadow;
}
eb->batch_flags |= I915_DISPATCH_SECURE;
}
err = eb_parse_pipeline(eb, shadow, trampoline);
if (err)
goto err_trampoline;
eb->vma[eb->buffer_count] = i915_vma_get(shadow);
eb->flags[eb->buffer_count] =
__EXEC_OBJECT_HAS_PIN | __EXEC_OBJECT_HAS_REF;
vma->exec_flags = &eb->flags[eb->buffer_count];
shadow->exec_flags = &eb->flags[eb->buffer_count];
eb->buffer_count++;
eb->trampoline = trampoline;
eb->batch_start_offset = 0;
eb->batch = vma;
eb->batch = shadow;
if (CMDPARSER_USES_GGTT(eb->i915))
eb->batch_flags |= I915_DISPATCH_SECURE;
/* eb->batch_len unchanged */
vma->private = pool;
return vma;
shadow->private = pool;
return 0;
err_trampoline:
if (trampoline)
i915_vma_unpin(trampoline);
err_shadow:
i915_vma_unpin(shadow);
err:
intel_engine_pool_put(pool);
return vma;
return err;
}
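For readers new to the helper this series leans on: a dma_fence_work bundles a callback with an i915_sw_fence chain, so the callback runs only once every awaited fence has signalled, and its own fence (base.dma) can be published to reservation objects, exactly as eb_parse_pipeline() does above. A minimal sketch of the pattern, using only the i915-internal API visible in this patch (illustrative only; the my_work names are invented):
struct my_work {
struct dma_fence_work base;
};
static int my_work_fn(struct dma_fence_work *work)
{
/* runs asynchronously once all awaited fences have signalled */
return 0;
}
static const struct dma_fence_work_ops my_work_ops = {
.name = "my_work",
.work = my_work_fn,
};
static int my_work_queue(void)
{
struct my_work *w;
w = kzalloc(sizeof(*w), GFP_KERNEL);
if (!w)
return -ENOMEM;
dma_fence_work_init(&w->base, &my_work_ops);
/* optionally await prerequisites via &w->base.chain here */
dma_fence_work_commit(&w->base); /* w->base.dma signals on completion */
return 0;
}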
static void
@ -2134,7 +2162,17 @@ static int eb_submit(struct i915_execbuffer *eb)
if (err)
return err;
if (eb->trampoline) {
GEM_BUG_ON(eb->batch_start_offset);
err = eb->engine->emit_bb_start(eb->request,
eb->trampoline->node.start +
eb->batch_len,
0, 0);
if (err)
return err;
}
if (intel_context_nopreempt(eb->context))
eb->request->flags |= I915_REQUEST_NOPREEMPT;
return 0;
@ -2220,6 +2258,9 @@ static int __eb_pin_engine(struct i915_execbuffer *eb, struct intel_context *ce)
if (err)
return err;
if (unlikely(intel_context_is_banned(ce)))
return -EIO;
/*
* Pinning the contexts may generate requests in order to acquire
* GGTT space, so do this first before we reserve a seqno for
@ -2515,6 +2556,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
eb.buffer_count = args->buffer_count;
eb.batch_start_offset = args->batch_start_offset;
eb.batch_len = args->batch_len;
eb.trampoline = NULL;
eb.batch_flags = 0;
if (args->flags & I915_EXEC_SECURE) {
@ -2606,15 +2648,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
if (eb.batch_len == 0)
eb.batch_len = eb.batch->size - eb.batch_start_offset;
err = eb_parse(&eb);
if (err)
goto err_vma;
/*
* snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
@ -2694,6 +2730,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
err = eb_submit(&eb);
err_request:
add_to_client(eb.request, file);
i915_request_get(eb.request);
i915_request_add(eb.request);
if (fences)
@ -2709,6 +2746,7 @@ err_request:
fput(out_fence->file);
}
}
i915_request_put(eb.request);
err_batch_unpin:
if (eb.batch_flags & I915_DISPATCH_SECURE)
@ -2718,6 +2756,8 @@ err_batch_unpin:
err_vma:
if (eb.exec)
eb_release_vmas(&eb);
if (eb.trampoline)
i915_vma_unpin(eb.trampoline);
mutex_unlock(&dev->struct_mutex);
err_engine:
eb_unpin_engine(&eb);
@ -2787,8 +2827,9 @@ i915_gem_execbuffer_ioctl(struct drm_device *dev, void *data,
exec2.flags = I915_EXEC_RENDER;
i915_execbuffer2_set_context_id(exec2, 0);
err = i915_gem_check_execbuffer(&exec2);
if (err)
return err;
/* Copy in the exec list from userland */
exec_list = kvmalloc_array(count, sizeof(*exec_list),
@ -2865,8 +2906,9 @@ i915_gem_execbuffer2_ioctl(struct drm_device *dev, void *data,
return -EINVAL;
}
err = i915_gem_check_execbuffer(args);
if (err)
return err;
/* Allocate an extra slot for use by the command parser */
exec2_list = kvmalloc_array(count + 1, eb_element_size(),


@ -28,8 +28,8 @@ int i915_gem_madvise_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_pread_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,


@ -79,9 +79,6 @@ __i915_gem_lmem_object_create(struct intel_memory_region *mem,
struct drm_i915_private *i915 = mem->i915;
struct drm_i915_gem_object *obj;
if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
return ERR_PTR(-E2BIG);
obj = i915_gem_object_alloc();
if (!obj)
return ERR_PTR(-ENOMEM);


@ -5,6 +5,7 @@
*/
#include <linux/mman.h>
#include <linux/pfn_t.h>
#include <linux/sizes.h>
#include "gt/intel_gt.h"
@ -14,7 +15,9 @@
#include "i915_gem_gtt.h"
#include "i915_gem_ioctls.h"
#include "i915_gem_object.h"
#include "i915_gem_mman.h"
#include "i915_trace.h"
#include "i915_user_extensions.h"
#include "i915_vma.h"
static inline bool
@ -144,6 +147,9 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj)
* 3 - Remove implicit set-domain(GTT) and synchronisation on initial
* pagefault; swapin remains transparent.
*
* 4 - Support multiple fault handlers per object depending on object's
* backing storage (a.k.a. MMAP_OFFSET).
*
* Restrictions:
*
* * snoopable objects cannot be accessed via the GTT. It can cause machine
@ -171,7 +177,7 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj)
*/
int i915_gem_mmap_gtt_version(void)
{
return 4;
}
static inline struct i915_ggtt_view
@ -197,29 +203,83 @@ compute_partial_view(const struct drm_i915_gem_object *obj,
return view;
}
static vm_fault_t i915_error_to_vmf_fault(int err)
{
switch (err) {
default:
WARN_ONCE(err, "unhandled error in %s: %i\n", __func__, err);
/* fallthrough */
case -EIO: /* shmemfs failure from swap device */
case -EFAULT: /* purged object */
case -ENODEV: /* bad object, how did you get here! */
return VM_FAULT_SIGBUS;
case -ENOSPC: /* shmemfs allocation failure */
case -ENOMEM: /* our allocation failure */
return VM_FAULT_OOM;
case 0:
case -EAGAIN:
case -ERESTARTSYS:
case -EINTR:
case -EBUSY:
/*
* EBUSY is ok: this just means that another thread
* already did the job.
*/
return VM_FAULT_NOPAGE;
}
}
static vm_fault_t vm_fault_cpu(struct vm_fault *vmf)
{
struct vm_area_struct *area = vmf->vma;
struct i915_mmap_offset *mmo = area->vm_private_data;
struct drm_i915_gem_object *obj = mmo->obj;
unsigned long i, size = area->vm_end - area->vm_start;
bool write = area->vm_flags & VM_WRITE;
vm_fault_t ret = VM_FAULT_SIGBUS;
int err;
if (!i915_gem_object_has_struct_page(obj))
return ret;
/* Sanity check that we allow writing into this object */
if (i915_gem_object_is_readonly(obj) && write)
return ret;
err = i915_gem_object_pin_pages(obj);
if (err)
return i915_error_to_vmf_fault(err);
/* PTEs are revoked in obj->ops->put_pages() */
for (i = 0; i < size >> PAGE_SHIFT; i++) {
struct page *page = i915_gem_object_get_page(obj, i);
ret = vmf_insert_pfn(area,
(unsigned long)area->vm_start + i * PAGE_SIZE,
page_to_pfn(page));
if (ret != VM_FAULT_NOPAGE)
break;
}
if (write) {
GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
obj->cache_dirty = true; /* XXX flush after PAT update? */
obj->mm.dirty = true;
}
i915_gem_object_unpin_pages(obj);
return ret;
}
static vm_fault_t vm_fault_gtt(struct vm_fault *vmf)
{
#define MIN_CHUNK_PAGES (SZ_1M >> PAGE_SHIFT)
struct vm_area_struct *area = vmf->vma;
struct i915_mmap_offset *mmo = area->vm_private_data;
struct drm_i915_gem_object *obj = mmo->obj;
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *i915 = to_i915(dev);
struct intel_runtime_pm *rpm = &i915->runtime_pm;
@ -312,6 +372,9 @@ vm_fault_t i915_gem_fault(struct vm_fault *vmf)
list_add(&obj->userfault_link, &i915->ggtt.userfault_list);
mutex_unlock(&i915->ggtt.vm.mutex);
/* Track the mmo associated with the fenced vma */
vma->mmo = mmo;
if (IS_ACTIVE(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND))
intel_wakeref_auto(&i915->ggtt.userfault_wakeref,
msecs_to_jiffies_timeout(CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND));
@ -332,67 +395,36 @@ err_rpm:
intel_runtime_pm_put(rpm, wakeref);
i915_gem_object_unpin_pages(obj);
err:
return i915_error_to_vmf_fault(ret);
}
void __i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj)
{
struct i915_vma *vma;
GEM_BUG_ON(!obj->userfault_count);
for_each_ggtt_vma(vma, obj)
i915_vma_revoke_mmap(vma);
GEM_BUG_ON(obj->userfault_count);
}
/*
 * It is vital that we remove the page mapping if we have mapped a tiled
 * object through the GTT and then lose the fence register due to
 * resource pressure. Similarly if the object has been moved out of the
 * aperture, then pages mapped into userspace must be revoked. Removing the
 * mapping will then trigger a page fault on the next user access, allowing
 * fixup by vm_fault_gtt().
 */
static void i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
intel_wakeref_t wakeref;
/*
* Serialisation between user GTT access and our code depends upon
* revoking the CPU's PTE whilst the mutex is held. The next user
* pagefault then has to wait until we release the mutex.
*
@ -406,9 +438,10 @@ void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj)
if (!obj->userfault_count)
goto out;
__i915_gem_object_release_mmap_gtt(obj);
/*
* Ensure that the CPU's PTE are revoked and there are not outstanding
* memory transactions from userspace before we return. The TLB
* flushing implied above by changing the PTE above *should* be
* sufficient, an extra barrier here just provides us with a bit
@ -422,54 +455,149 @@ out:
intel_runtime_pm_put(&i915->runtime_pm, wakeref);
}
void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj)
{
struct i915_mmap_offset *mmo;
spin_lock(&obj->mmo.lock);
list_for_each_entry(mmo, &obj->mmo.offsets, offset) {
/*
 * vma_node_unmap for GTT mmaps handled already in
 * __i915_gem_object_release_mmap_gtt
 */
if (mmo->mmap_type == I915_MMAP_TYPE_GTT)
continue;
spin_unlock(&obj->mmo.lock);
drm_vma_node_unmap(&mmo->vma_node,
obj->base.dev->anon_inode->i_mapping);
spin_lock(&obj->mmo.lock);
}
spin_unlock(&obj->mmo.lock);
}
/**
* i915_gem_object_release_mmap - remove physical page mappings
* @obj: obj in question
*
* Preserve the reservation of the mmapping with the DRM core code, but
* relinquish ownership of the pages back to the system.
*/
void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj)
{
i915_gem_object_release_mmap_gtt(obj);
i915_gem_object_release_mmap_offset(obj);
}
static struct i915_mmap_offset *
mmap_offset_attach(struct drm_i915_gem_object *obj,
enum i915_mmap_type mmap_type,
struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct i915_mmap_offset *mmo;
int err;
mmo = kmalloc(sizeof(*mmo), GFP_KERNEL);
if (!mmo)
return ERR_PTR(-ENOMEM);
mmo->obj = obj;
mmo->dev = obj->base.dev;
mmo->file = file;
mmo->mmap_type = mmap_type;
drm_vma_node_reset(&mmo->vma_node);
err = drm_vma_offset_add(mmo->dev->vma_offset_manager, &mmo->vma_node,
obj->base.size / PAGE_SIZE);
if (likely(!err))
goto out;
/* Attempt to reap some mmap space from dead objects */
err = intel_gt_retire_requests_timeout(&i915->gt, MAX_SCHEDULE_TIMEOUT);
if (err)
goto err;
i915_gem_drain_freed_objects(i915);
err = drm_vma_offset_add(mmo->dev->vma_offset_manager, &mmo->vma_node,
obj->base.size / PAGE_SIZE);
if (err)
goto err;
out:
if (file)
drm_vma_node_allow(&mmo->vma_node, file);
spin_lock(&obj->mmo.lock);
list_add(&mmo->offset, &obj->mmo.offsets);
spin_unlock(&obj->mmo.lock);
return mmo;
err:
kfree(mmo);
return ERR_PTR(err);
}
static int
__assign_mmap_offset(struct drm_file *file,
u32 handle,
enum i915_mmap_type mmap_type,
u64 *offset)
{
struct drm_i915_gem_object *obj;
struct i915_mmap_offset *mmo;
int err;
obj = i915_gem_object_lookup(file, handle);
if (!obj)
return -ENOENT;
if (mmap_type == I915_MMAP_TYPE_GTT &&
i915_gem_object_never_bind_ggtt(obj)) {
err = -ENODEV;
goto out;
}
if (mmap_type != I915_MMAP_TYPE_GTT &&
!i915_gem_object_has_struct_page(obj)) {
err = -ENODEV;
goto out;
}
mmo = mmap_offset_attach(obj, mmap_type, file);
if (IS_ERR(mmo)) {
err = PTR_ERR(mmo);
goto out;
}
*offset = drm_vma_node_offset_addr(&mmo->vma_node);
err = 0;
out:
i915_gem_object_put(obj);
return err;
}
int
i915_gem_dumb_mmap_offset(struct drm_file *file,
struct drm_device *dev,
u32 handle,
u64 *offset)
{
enum i915_mmap_type mmap_type;
if (boot_cpu_has(X86_FEATURE_PAT))
mmap_type = I915_MMAP_TYPE_WC;
else if (!i915_ggtt_has_aperture(&to_i915(dev)->ggtt))
return -ENODEV;
else
mmap_type = I915_MMAP_TYPE_GTT;
return __assign_mmap_offset(file, handle, mmap_type, offset);
}
/**
* i915_gem_mmap_offset_ioctl - prepare an object for GTT mmap'ing
* @dev: DRM device
* @data: GTT mapping ioctl data
* @file: GEM object info
@ -484,12 +612,179 @@ out:
* userspace.
*/
int
i915_gem_mmap_offset_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_gem_mmap_offset *args = data;
enum i915_mmap_type type;
int err;
/*
* Historically we failed to check args.pad and args.offset
* and so we cannot use those fields for user input and we cannot
* add -EINVAL for them as the ABI is fixed, i.e. old userspace
* may be feeding in garbage in those fields.
*
* if (args->pad) return -EINVAL; is verboten!
*/
err = i915_user_extensions(u64_to_user_ptr(args->extensions),
NULL, 0, NULL);
if (err)
return err;
switch (args->flags) {
case I915_MMAP_OFFSET_GTT:
if (!i915_ggtt_has_aperture(&i915->ggtt))
return -ENODEV;
type = I915_MMAP_TYPE_GTT;
break;
case I915_MMAP_OFFSET_WC:
if (!boot_cpu_has(X86_FEATURE_PAT))
return -ENODEV;
type = I915_MMAP_TYPE_WC;
break;
case I915_MMAP_OFFSET_WB:
type = I915_MMAP_TYPE_WB;
break;
case I915_MMAP_OFFSET_UC:
if (!boot_cpu_has(X86_FEATURE_PAT))
return -ENODEV;
type = I915_MMAP_TYPE_UC;
break;
default:
return -EINVAL;
}
return __assign_mmap_offset(file, args->handle, type, &args->offset);
}
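As a userspace-side illustration (not part of the patch): the new ioctl hands back a fake offset which is then passed to mmap() on the DRM fd, where i915_gem_mmap() below resolves it to the right i915_mmap_offset node. A rough sketch, assuming the uapi definitions added by this series and eliding error handling; map_gem_wc is a hypothetical helper:
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>
static void *map_gem_wc(int drm_fd, __u32 handle, size_t size)
{
struct drm_i915_gem_mmap_offset arg = {
.handle = handle,
.flags = I915_MMAP_OFFSET_WC,
};
if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg))
return NULL;
/* arg.offset is the fake offset resolved by i915_gem_mmap() */
return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
drm_fd, arg.offset);
}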
static void vm_open(struct vm_area_struct *vma)
{
struct i915_mmap_offset *mmo = vma->vm_private_data;
struct drm_i915_gem_object *obj = mmo->obj;
GEM_BUG_ON(!obj);
i915_gem_object_get(obj);
}
static void vm_close(struct vm_area_struct *vma)
{
struct i915_mmap_offset *mmo = vma->vm_private_data;
struct drm_i915_gem_object *obj = mmo->obj;
GEM_BUG_ON(!obj);
i915_gem_object_put(obj);
}
static const struct vm_operations_struct vm_ops_gtt = {
.fault = vm_fault_gtt,
.open = vm_open,
.close = vm_close,
};
static const struct vm_operations_struct vm_ops_cpu = {
.fault = vm_fault_cpu,
.open = vm_open,
.close = vm_close,
};
/*
 * This overcomes the limitation in drm_gem_mmap's assignment of a
 * drm_gem_object as the vma->vm_private_data, since we need to be able
 * to resolve multiple mmap offsets which could be tied to a single gem
 * object.
 */
int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma)
{
struct drm_vma_offset_node *node;
struct drm_file *priv = filp->private_data;
struct drm_device *dev = priv->minor->dev;
struct i915_mmap_offset *mmo = NULL;
struct drm_gem_object *obj = NULL;
if (drm_dev_is_unplugged(dev))
return -ENODEV;
drm_vma_offset_lock_lookup(dev->vma_offset_manager);
node = drm_vma_offset_exact_lookup_locked(dev->vma_offset_manager,
vma->vm_pgoff,
vma_pages(vma));
if (likely(node)) {
mmo = container_of(node, struct i915_mmap_offset,
vma_node);
/*
* In our dependency chain, the drm_vma_offset_node
* depends on the validity of the mmo, which depends on
* the gem object. However the only reference we have
* at this point is the mmo (as the parent of the node).
* Try to check if the gem object was at least cleared.
*/
if (!mmo || !mmo->obj) {
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
return -EINVAL;
}
/*
* Skip 0-refcnted objects as it is in the process of being
* destroyed and will be invalid when the vma manager lock
* is released.
*/
obj = &mmo->obj->base;
if (!kref_get_unless_zero(&obj->refcount))
obj = NULL;
}
drm_vma_offset_unlock_lookup(dev->vma_offset_manager);
if (!obj)
return -EINVAL;
if (!drm_vma_node_is_allowed(node, priv)) {
drm_gem_object_put_unlocked(obj);
return -EACCES;
}
if (i915_gem_object_is_readonly(to_intel_bo(obj))) {
if (vma->vm_flags & VM_WRITE) {
drm_gem_object_put_unlocked(obj);
return -EINVAL;
}
vma->vm_flags &= ~VM_MAYWRITE;
}
vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
vma->vm_private_data = mmo;
switch (mmo->mmap_type) {
case I915_MMAP_TYPE_WC:
vma->vm_page_prot =
pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
vma->vm_ops = &vm_ops_cpu;
break;
case I915_MMAP_TYPE_WB:
vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
vma->vm_ops = &vm_ops_cpu;
break;
case I915_MMAP_TYPE_UC:
vma->vm_page_prot =
pgprot_noncached(vm_get_page_prot(vma->vm_flags));
vma->vm_ops = &vm_ops_cpu;
break;
case I915_MMAP_TYPE_GTT:
vma->vm_page_prot =
pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
vma->vm_ops = &vm_ops_gtt;
break;
}
vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
return 0;
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)


@ -0,0 +1,31 @@
/*
* SPDX-License-Identifier: MIT
*
* Copyright © 2019 Intel Corporation
*/
#ifndef __I915_GEM_MMAN_H__
#define __I915_GEM_MMAN_H__
#include <linux/mm_types.h>
#include <linux/types.h>
struct drm_device;
struct drm_file;
struct drm_i915_gem_object;
struct file;
struct i915_mmap_offset;
struct mutex;
int i915_gem_mmap_gtt_version(void);
int i915_gem_mmap(struct file *filp, struct vm_area_struct *vma);
int i915_gem_dumb_mmap_offset(struct drm_file *file_priv,
struct drm_device *dev,
u32 handle, u64 *offset);
void __i915_gem_object_release_mmap_gtt(struct drm_i915_gem_object *obj);
void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj);
void i915_gem_object_release_mmap_offset(struct drm_i915_gem_object *obj);
#endif


@ -22,11 +22,14 @@
*
*/
#include <linux/sched/mm.h>
#include "display/intel_frontbuffer.h"
#include "gt/intel_gt.h"
#include "i915_drv.h"
#include "i915_gem_clflush.h"
#include "i915_gem_context.h"
#include "i915_gem_mman.h"
#include "i915_gem_object.h"
#include "i915_globals.h"
#include "i915_trace.h"
@ -59,6 +62,9 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,
INIT_LIST_HEAD(&obj->lut_list);
spin_lock_init(&obj->mmo.lock);
INIT_LIST_HEAD(&obj->mmo.offsets);
init_rcu_head(&obj->rcu);
obj->ops = ops;
@ -95,6 +101,7 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
struct drm_i915_gem_object *obj = to_intel_bo(gem);
struct drm_i915_file_private *fpriv = file->driver_priv;
struct i915_lut_handle *lut, *ln;
struct i915_mmap_offset *mmo;
LIST_HEAD(close);
i915_gem_object_lock(obj);
@ -109,6 +116,17 @@ void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file)
}
i915_gem_object_unlock(obj);
spin_lock(&obj->mmo.lock);
list_for_each_entry(mmo, &obj->mmo.offsets, offset) {
if (mmo->file != file)
continue;
spin_unlock(&obj->mmo.lock);
drm_vma_node_revoke(&mmo->vma_node, file);
spin_lock(&obj->mmo.lock);
}
spin_unlock(&obj->mmo.lock);
list_for_each_entry_safe(lut, ln, &close, obj_link) {
struct i915_gem_context *ctx = lut->ctx;
struct i915_vma *vma;
@ -156,6 +174,8 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
wakeref = intel_runtime_pm_get(&i915->runtime_pm);
llist_for_each_entry_safe(obj, on, freed, freed) {
struct i915_mmap_offset *mmo, *mn;
trace_i915_gem_object_destroy(obj);
if (!list_empty(&obj->vma.list)) {
@ -174,19 +194,28 @@ static void __i915_gem_free_objects(struct drm_i915_private *i915,
GEM_BUG_ON(vma->obj != obj);
spin_unlock(&obj->vma.lock);
__i915_vma_put(vma);
spin_lock(&obj->vma.lock);
}
spin_unlock(&obj->vma.lock);
}
i915_gem_object_release_mmap(obj);
list_for_each_entry_safe(mmo, mn, &obj->mmo.offsets, offset) {
drm_vma_offset_remove(obj->base.dev->vma_offset_manager,
&mmo->vma_node);
kfree(mmo);
}
INIT_LIST_HEAD(&obj->mmo.offsets);
GEM_BUG_ON(atomic_read(&obj->bind_count));
GEM_BUG_ON(obj->userfault_count);
GEM_BUG_ON(!list_empty(&obj->lut_list));
atomic_set(&obj->mm.pages_pin_count, 0);
__i915_gem_object_put_pages(obj);
GEM_BUG_ON(i915_gem_object_has_pages(obj));
bitmap_free(obj->bit_17);
@ -277,18 +306,14 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
switch (obj->write_domain) {
case I915_GEM_DOMAIN_GTT:
spin_lock(&obj->vma.lock);
for_each_ggtt_vma(vma, obj) {
if (i915_vma_unset_ggtt_write(vma))
intel_gt_flush_ggtt_writes(vma->vm->gt);
}
spin_unlock(&obj->vma.lock);
i915_gem_object_flush_frontbuffer(obj, ORIGIN_CPU);
break;
case I915_GEM_DOMAIN_WC:
@ -308,6 +333,30 @@ i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
obj->write_domain = 0;
}
void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin)
{
struct intel_frontbuffer *front;
front = __intel_frontbuffer_get(obj);
if (front) {
intel_frontbuffer_flush(front, origin);
intel_frontbuffer_put(front);
}
}
void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin)
{
struct intel_frontbuffer *front;
front = __intel_frontbuffer_get(obj);
if (front) {
intel_frontbuffer_invalidate(front, origin);
intel_frontbuffer_put(front);
}
}
void i915_gem_init__objects(struct drm_i915_private *i915)
{
INIT_WORK(&i915->mm.free_work, __i915_gem_free_work);


@ -13,8 +13,8 @@
#include <drm/i915_drm.h>
#include "display/intel_frontbuffer.h"
#include "i915_gem_object_types.h"
#include "i915_gem_gtt.h"
void i915_gem_init__objects(struct drm_i915_private *i915);
@ -132,13 +132,13 @@ void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
static inline void
i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
{
obj->flags |= I915_BO_READONLY;
}
static inline bool
i915_gem_object_is_readonly(const struct drm_i915_gem_object *obj)
{
return obj->flags & I915_BO_READONLY;
}
static inline bool
@ -271,10 +271,27 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
int ____i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj);
enum i915_mm_subclass { /* lockdep subclass for obj->mm.lock/struct_mutex */
I915_MM_NORMAL = 0,
/*
* Only used by struct_mutex, when called "recursively" from
* direct-reclaim-esque paths. Safe because there is only ever one
* struct_mutex in the entire system.
*/
I915_MM_SHRINKER = 1,
/*
* Used for obj->mm.lock when allocating pages. Safe because the object
* isn't yet on any LRU, and therefore the shrinker can't deadlock on
* it. As soon as the object has pages, obj->mm.lock nests within
* fs_reclaim.
*/
I915_MM_GET_PAGES = 1,
};
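To illustrate how the two subclasses are consumed (sketch only; the real call sites appear later in this series): the allocation path locks obj->mm.lock with the I915_MM_GET_PAGES subclass, while the shrinker takes the plain subclass-0 lock, so lockdep treats the two acquisitions as distinct classes of the same mutex:
/* illustrative helper, not part of the patch */
static int example_lock_for_get_pages(struct drm_i915_gem_object *obj)
{
int err;
err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (err)
return err;
/* ... allocate and set the backing pages here ... */
mutex_unlock(&obj->mm.lock);
return 0;
}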
static inline int __must_check
i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
{
might_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (atomic_inc_not_zero(&obj->mm.pages_pin_count))
return 0;
@ -317,13 +334,7 @@ i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
__i915_gem_object_unpin_pages(obj);
}
int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
void i915_gem_object_truncate(struct drm_i915_gem_object *obj);
void i915_gem_object_writeback(struct drm_i915_gem_object *obj);
@ -376,9 +387,6 @@ static inline void i915_gem_object_unpin_map(struct drm_i915_gem_object *obj)
i915_gem_object_unpin_pages(obj);
}
void __i915_gem_object_release_mmap(struct drm_i915_gem_object *obj);
void i915_gem_object_release_mmap(struct drm_i915_gem_object *obj);
void
i915_gem_object_flush_write_domain(struct drm_i915_gem_object *obj,
unsigned int flush_domains);
@ -463,4 +471,25 @@ int i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
unsigned int flags,
const struct i915_sched_attr *attr);
void __i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin);
void __i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin);
static inline void
i915_gem_object_flush_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin)
{
if (unlikely(rcu_access_pointer(obj->frontbuffer)))
__i915_gem_object_flush_frontbuffer(obj, origin);
}
static inline void
i915_gem_object_invalidate_frontbuffer(struct drm_i915_gem_object *obj,
enum fb_op_origin origin)
{
if (unlikely(rcu_access_pointer(obj->frontbuffer)))
__i915_gem_object_invalidate_frontbuffer(obj, origin);
}
#endif


@ -63,6 +63,23 @@ struct drm_i915_gem_object_ops {
void (*release)(struct drm_i915_gem_object *obj);
};
enum i915_mmap_type {
I915_MMAP_TYPE_GTT = 0,
I915_MMAP_TYPE_WC,
I915_MMAP_TYPE_WB,
I915_MMAP_TYPE_UC,
};
struct i915_mmap_offset {
struct drm_device *dev;
struct drm_vma_offset_node vma_node;
struct drm_i915_gem_object *obj;
struct drm_file *file;
enum i915_mmap_type mmap_type;
struct list_head offset;
};
struct drm_i915_gem_object {
struct drm_gem_object base;
@ -118,12 +135,18 @@ struct drm_i915_gem_object {
unsigned int userfault_count;
struct list_head userfault_link;
struct {
spinlock_t lock; /* Protects access to mmo offsets */
struct list_head offsets;
} mmo;
I915_SELFTEST_DECLARE(struct list_head st_link);
unsigned long flags;
#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
#define I915_BO_ALLOC_VOLATILE BIT(1)
#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
#define I915_BO_READONLY BIT(2)
/*
* Is the object to be mapped as read-only to the GPU
@ -150,7 +173,7 @@ struct drm_i915_gem_object {
*/
u16 write_domain;
struct intel_frontbuffer __rcu *frontbuffer;
/** Current tiling stride for the object, if it's tiled. */
unsigned int tiling_and_stride;
@ -162,7 +185,11 @@ struct drm_i915_gem_object {
atomic_t bind_count;
struct {
/*
* Protects the pages and their use. Do not use directly, but
* instead go through the pin/unpin interfaces.
*/
struct mutex lock;
atomic_t pages_pin_count;
atomic_t shrink_pin;


@ -8,6 +8,7 @@
#include "i915_gem_object.h"
#include "i915_scatterlist.h"
#include "i915_gem_lmem.h"
#include "i915_gem_mman.h"
void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages,
@ -106,7 +107,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
{
int err;
err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (err)
return err;
@ -190,8 +191,7 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
return pages;
}
int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
{
struct sg_table *pages;
int err;
@ -202,12 +202,14 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
GEM_BUG_ON(atomic_read(&obj->bind_count));
/* May be called by shrinker from within get_pages() (on another bo) */
mutex_lock(&obj->mm.lock);
if (unlikely(atomic_read(&obj->mm.pages_pin_count))) {
err = -EBUSY;
goto unlock;
}
i915_gem_object_release_mmap_offset(obj);
/*
* ->put_pages might need to allocate memory for the bit17 swizzle
* array, hence protect them from being reaped by removing them from gtt
@ -308,7 +310,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
if (!i915_gem_object_type_has(obj, flags))
return ERR_PTR(-ENXIO);
err = mutex_lock_interruptible_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (err)
return ERR_PTR(err);


@ -164,7 +164,7 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
if (err)
return err;
mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (obj->mm.madv != I915_MADV_WILLNEED) {
err = -EFAULT;


@ -13,7 +13,7 @@
void i915_gem_suspend(struct drm_i915_private *i915)
{
GEM_TRACE("%s\n", dev_name(i915->drm.dev));
intel_wakeref_auto(&i915->ggtt.userfault_wakeref, 0);
flush_workqueue(i915->wq);
@ -99,30 +99,12 @@ void i915_gem_suspend_late(struct drm_i915_private *i915)
void i915_gem_resume(struct drm_i915_private *i915)
{
GEM_TRACE("%s\n", dev_name(i915->drm.dev));
/*
* As we didn't flush the kernel context before suspend, we cannot
* guarantee that the context image is complete. So let's just reset
* it and start again.
*/
intel_gt_resume(&i915->gt);
}


@ -85,7 +85,7 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
}
prev_end = offset + block_size;
}
sg_page_sizes |= sg->length;
sg_mark_end(sg);


@ -57,7 +57,7 @@ static bool unsafe_drop_pages(struct drm_i915_gem_object *obj,
flags = I915_GEM_OBJECT_UNBIND_ACTIVE;
if (i915_gem_object_unbind(obj, flags) == 0)
__i915_gem_object_put_pages(obj);
return !i915_gem_object_has_pages(obj);
}
@ -209,8 +209,7 @@ i915_gem_shrink(struct drm_i915_private *i915,
if (unsafe_drop_pages(obj, shrink)) {
/* May arrive from get_pages on another bo */
mutex_lock(&obj->mm.lock);
if (!i915_gem_object_has_pages(obj)) {
try_to_writeback(obj, shrink);
count += obj->base.size >> PAGE_SHIFT;


@ -26,48 +26,49 @@
* for is a boon.
*/
int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *i915,
struct drm_mm_node *node, u64 size,
unsigned alignment, u64 start, u64 end)
{
int ret;
if (!drm_mm_initialized(&i915->mm.stolen))
return -ENODEV;
/* WaSkipStolenMemoryFirstPage:bdw+ */
if (INTEL_GEN(i915) >= 8 && start < 4096)
start = 4096;
mutex_lock(&i915->mm.stolen_lock);
ret = drm_mm_insert_node_in_range(&i915->mm.stolen, node,
size, alignment, 0,
start, end, DRM_MM_INSERT_BEST);
mutex_unlock(&i915->mm.stolen_lock);
return ret;
}
int i915_gem_stolen_insert_node(struct drm_i915_private *i915,
struct drm_mm_node *node, u64 size,
unsigned alignment)
{
return i915_gem_stolen_insert_node_in_range(i915, node, size,
alignment, 0, U64_MAX);
}
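Usage of these helpers pairs an insert with a matching remove; a minimal sketch (illustrative only, not part of the patch, assuming <linux/sizes.h> for SZ_4K):
/* illustrative helper: reserve and release a 4KiB chunk of stolen memory */
static int example_stolen_reserve(struct drm_i915_private *i915)
{
struct drm_mm_node node = {};
int err;
err = i915_gem_stolen_insert_node(i915, &node, SZ_4K, 4096);
if (err)
return err;
/* ... use [node.start, node.start + node.size) ... */
i915_gem_stolen_remove_node(i915, &node);
return 0;
}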
void i915_gem_stolen_remove_node(struct drm_i915_private *i915,
struct drm_mm_node *node)
{
mutex_lock(&i915->mm.stolen_lock);
drm_mm_remove_node(node);
mutex_unlock(&i915->mm.stolen_lock);
}
static int i915_adjust_stolen(struct drm_i915_private *i915,
struct resource *dsm)
{
struct i915_ggtt *ggtt = &i915->ggtt;
struct intel_uncore *uncore = ggtt->vm.gt->uncore;
struct resource *r;
if (dsm->start == 0 || dsm->end <= dsm->start)
@ -79,14 +80,14 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
*/
/* Make sure we don't clobber the GTT if it's within stolen memory */
if (INTEL_GEN(i915) <= 4 &&
!IS_G33(i915) && !IS_PINEVIEW(i915) && !IS_G4X(i915)) {
struct resource stolen[2] = {*dsm, *dsm};
struct resource ggtt_res;
resource_size_t ggtt_start;
ggtt_start = intel_uncore_read(uncore, PGTBL_CTL);
if (IS_GEN(i915, 4))
ggtt_start = (ggtt_start & PGTBL_ADDRESS_LO_MASK) |
(ggtt_start & PGTBL_ADDRESS_HI_MASK) << 28;
else
@ -120,7 +121,7 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
* kernel. So if the region is already marked as busy, something
* is seriously wrong.
*/
r = devm_request_mem_region(i915->drm.dev, dsm->start,
resource_size(dsm),
"Graphics Stolen Memory");
if (r == NULL) {
@ -133,14 +134,14 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
* reservation starting from 1 instead of 0.
* There's also BIOS with off-by-one on the other end.
*/
r = devm_request_mem_region(i915->drm.dev, dsm->start + 1,
resource_size(dsm) - 2,
"Graphics Stolen Memory");
/*
* GEN3 firmware likes to smash pci bridges into the stolen
* range. Apparently this works.
*/
if (!r && !IS_GEN(i915, 3)) {
DRM_ERROR("conflict detected with stolen region: %pR\n",
dsm);
@ -151,25 +152,27 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
return 0;
}
static void i915_gem_cleanup_stolen(struct drm_i915_private *i915)
{
if (!drm_mm_initialized(&i915->mm.stolen))
return;
drm_mm_takedown(&i915->mm.stolen);
}
static void g4x_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore,
IS_GM45(i915) ?
CTG_STOLEN_RESERVED :
ELK_STOLEN_RESERVED);
resource_size_t stolen_top = i915->dsm.end + 1;
DRM_DEBUG_DRIVER("%s_STOLEN_RESERVED = %08x\n",
IS_GM45(i915) ? "CTG" : "ELK", reg_val);
if ((reg_val & G4X_STOLEN_RESERVED_ENABLE) == 0)
return;
@ -178,7 +181,7 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *dev_priv,
* Whether ILK really reuses the ELK register for this is unclear.
* Let's see if we catch anyone with this supposedly enabled on ILK.
*/
WARN(IS_GEN(i915, 5), "ILK stolen reserved found? 0x%08x\n",
reg_val);
if (!(reg_val & G4X_STOLEN_RESERVED_ADDR2_MASK))
@ -190,11 +193,12 @@ static void g4x_get_stolen_reserved(struct drm_i915_private *dev_priv,
*size = stolen_top - *base;
}
static void gen6_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
@ -222,12 +226,13 @@ static void gen6_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
}
static void vlv_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
resource_size_t stolen_top = i915->dsm.end + 1;
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
@ -250,11 +255,12 @@ static void vlv_get_stolen_reserved(struct drm_i915_private *dev_priv,
*base = stolen_top - *size;
}
static void gen7_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
@ -276,11 +282,12 @@ static void gen7_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
}
static void chv_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
@ -308,12 +315,13 @@ static void chv_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
}
static void bdw_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u32 reg_val = intel_uncore_read(uncore, GEN6_STOLEN_RESERVED);
resource_size_t stolen_top = i915->dsm.end + 1;
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = %08x\n", reg_val);
@ -328,10 +336,11 @@ static void bdw_get_stolen_reserved(struct drm_i915_private *dev_priv,
}
static void icl_get_stolen_reserved(struct drm_i915_private *i915,
struct intel_uncore *uncore,
resource_size_t *base,
resource_size_t *size)
{
u64 reg_val = intel_uncore_read64(uncore, GEN6_STOLEN_RESERVED);
DRM_DEBUG_DRIVER("GEN6_STOLEN_RESERVED = 0x%016llx\n", reg_val);
@ -356,22 +365,23 @@ static void icl_get_stolen_reserved(struct drm_i915_private *i915,
}
}
static int i915_gem_init_stolen(struct drm_i915_private *i915)
{
struct intel_uncore *uncore = &i915->uncore;
resource_size_t reserved_base, stolen_top;
resource_size_t reserved_total, reserved_size;
mutex_init(&i915->mm.stolen_lock);
if (intel_vgpu_active(i915)) {
dev_notice(i915->drm.dev,
"%s, disabling use of stolen memory\n",
"iGVT-g active");
return 0;
}
if (intel_vtd_active() && INTEL_GEN(i915) < 8) {
dev_notice(i915->drm.dev,
"%s, disabling use of stolen memory\n",
"DMAR active");
return 0;
@ -380,58 +390,59 @@ static int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
if (resource_size(&intel_graphics_stolen_res) == 0)
return 0;
i915->dsm = intel_graphics_stolen_res;
if (i915_adjust_stolen(i915, &i915->dsm))
return 0;
GEM_BUG_ON(i915->dsm.start == 0);
GEM_BUG_ON(i915->dsm.end <= i915->dsm.start);
stolen_top = i915->dsm.end + 1;
reserved_base = stolen_top;
reserved_size = 0;
switch (INTEL_GEN(i915)) {
case 2:
case 3:
break;
case 4:
if (!IS_G4X(i915))
break;
/* fall through */
case 5:
g4x_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
break;
case 6:
gen6_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
break;
case 7:
if (IS_VALLEYVIEW(i915))
vlv_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
else
gen7_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
break;
case 8:
case 9:
case 10:
if (IS_LP(i915))
chv_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
else
bdw_get_stolen_reserved(i915, uncore,
&reserved_base, &reserved_size);
break;
default:
MISSING_CASE(INTEL_GEN(i915));
/* fall-through */
case 11:
case 12:
icl_get_stolen_reserved(i915, uncore,
&reserved_base,
&reserved_size);
break;
}
@ -448,12 +459,12 @@ static int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
reserved_size = 0;
}
i915->dsm_reserved =
(struct resource)DEFINE_RES_MEM(reserved_base, reserved_size);
if (!resource_contains(&i915->dsm, &i915->dsm_reserved)) {
DRM_ERROR("Stolen reserved area %pR outside stolen memory %pR\n",
&i915->dsm_reserved, &i915->dsm);
return 0;
}
@ -462,14 +473,14 @@ static int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
reserved_total = stolen_top - reserved_base;
DRM_DEBUG_DRIVER("Memory reserved for graphics device: %lluK, usable: %lluK\n",
(u64)resource_size(&i915->dsm) >> 10,
((u64)resource_size(&i915->dsm) - reserved_total) >> 10);
i915->stolen_usable_size =
resource_size(&i915->dsm) - reserved_total;
/* Basic memrange allocator for stolen space. */
drm_mm_init(&i915->mm.stolen, 0, i915->stolen_usable_size);
return 0;
}
@ -478,11 +489,11 @@ static struct sg_table *
i915_pages_create_for_stolen(struct drm_device *dev,
resource_size_t offset, resource_size_t size)
{
struct drm_i915_private *i915 = to_i915(dev);
struct sg_table *st;
struct scatterlist *sg;
GEM_BUG_ON(range_overflows(offset, size, resource_size(&i915->dsm)));
/* We hide that we have no struct page backing our stolen object
* by wrapping the contiguous physical allocation with a fake
@ -502,7 +513,7 @@ i915_pages_create_for_stolen(struct drm_device *dev,
sg->offset = 0;
sg->length = size;
sg_dma_address(sg) = (dma_addr_t)i915->dsm.start + offset;
sg_dma_len(sg) = size;
return st;
@ -533,16 +544,15 @@ static void i915_gem_object_put_pages_stolen(struct drm_i915_gem_object *obj,
static void
i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
struct drm_mm_node *stolen = fetch_and_zero(&obj->stolen);
GEM_BUG_ON(!stolen);
if (obj->mm.region)
i915_gem_object_release_memory_region(obj);
i915_gem_stolen_remove_node(i915, stolen);
kfree(stolen);
}
static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
@ -552,9 +562,8 @@ static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
};
static struct drm_i915_gem_object *
__i915_gem_object_create_stolen(struct intel_memory_region *mem,
struct drm_mm_node *stolen)
{
static struct lock_class_key lock_class;
struct drm_i915_gem_object *obj;
@ -565,20 +574,19 @@ __i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
if (!obj)
goto err;
drm_gem_private_object_init(&mem->i915->drm, &obj->base, stolen->size);
i915_gem_object_init(obj, &i915_gem_object_stolen_ops, &lock_class);
obj->stolen = stolen;
obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
cache_level = HAS_LLC(mem->i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
i915_gem_object_set_cache_coherency(obj, cache_level);
err = i915_gem_object_pin_pages(obj);
if (err)
goto cleanup;
i915_gem_object_init_memory_region(obj, mem, 0);
return obj;
@ -593,12 +601,12 @@ _i915_gem_object_create_stolen(struct intel_memory_region *mem,
resource_size_t size,
unsigned int flags)
{
struct drm_i915_private *i915 = mem->i915;
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
int ret;
if (!drm_mm_initialized(&i915->mm.stolen))
return ERR_PTR(-ENODEV);
if (size == 0)
@ -608,30 +616,30 @@ _i915_gem_object_create_stolen(struct intel_memory_region *mem,
if (!stolen)
return ERR_PTR(-ENOMEM);
ret = i915_gem_stolen_insert_node(i915, stolen, size, 4096);
if (ret) {
obj = ERR_PTR(ret);
goto err_free;
}
obj = __i915_gem_object_create_stolen(mem, stolen);
if (IS_ERR(obj))
goto err_remove;
return obj;
err_remove:
i915_gem_stolen_remove_node(i915, stolen);
err_free:
kfree(stolen);
return obj;
}
struct drm_i915_gem_object *
i915_gem_object_create_stolen(struct drm_i915_private *i915,
resource_size_t size)
{
return i915_gem_object_create_region(i915->mm.regions[INTEL_REGION_STOLEN],
size, I915_BO_ALLOC_CONTIGUOUS);
}
@ -665,18 +673,19 @@ struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915)
}
struct drm_i915_gem_object *
i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *i915,
resource_size_t stolen_offset,
resource_size_t gtt_offset,
resource_size_t size)
{
struct intel_memory_region *mem = i915->mm.regions[INTEL_REGION_STOLEN];
struct i915_ggtt *ggtt = &i915->ggtt;
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
struct i915_vma *vma;
int ret;
if (!drm_mm_initialized(&i915->mm.stolen))
return ERR_PTR(-ENODEV);
DRM_DEBUG_DRIVER("creating preallocated stolen object: stolen_offset=%pa, gtt_offset=%pa, size=%pa\n",
@ -694,19 +703,19 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
stolen->start = stolen_offset;
stolen->size = size;
mutex_lock(&i915->mm.stolen_lock);
ret = drm_mm_reserve_node(&i915->mm.stolen, stolen);
mutex_unlock(&i915->mm.stolen_lock);
if (ret) {
DRM_DEBUG_DRIVER("failed to allocate stolen space\n");
kfree(stolen);
return ERR_PTR(ret);
}
obj = __i915_gem_object_create_stolen(mem, stolen);
if (IS_ERR(obj)) {
DRM_DEBUG_DRIVER("failed to allocate stolen object\n");
i915_gem_stolen_remove_node(i915, stolen);
kfree(stolen);
return obj;
}


@ -11,6 +11,7 @@
#include "i915_drv.h"
#include "i915_gem.h"
#include "i915_gem_ioctls.h"
#include "i915_gem_mman.h"
#include "i915_gem_object.h"
/**


@ -129,9 +129,10 @@ userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
spin_unlock(&mn->lock);
ret = i915_gem_object_unbind(obj,
I915_GEM_OBJECT_UNBIND_ACTIVE |
I915_GEM_OBJECT_UNBIND_BARRIER);
if (ret == 0)
ret = __i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
if (ret)
return ret;
@ -459,31 +460,36 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
if (pvec != NULL) {
struct mm_struct *mm = obj->userptr.mm->mm;
unsigned int flags = 0;
int locked = 0;
if (!i915_gem_object_is_readonly(obj))
flags |= FOLL_WRITE;
ret = -EFAULT;
if (mmget_not_zero(mm)) {
while (pinned < npages) {
if (!locked) {
down_read(&mm->mmap_sem);
locked = 1;
}
ret = get_user_pages_remote
(work->task, mm,
obj->userptr.ptr + pinned * PAGE_SIZE,
npages - pinned,
flags,
pvec + pinned, NULL, &locked);
if (ret < 0)
break;
pinned += ret;
}
if (locked)
up_read(&mm->mmap_sem);
mmput(mm);
}
}
mutex_lock_nested(&obj->mm.lock, I915_MM_GET_PAGES);
if (obj->userptr.work == &work->work) {
struct sg_table *pages = ERR_PTR(ret);
@ -773,15 +779,11 @@ i915_gem_userptr_ioctl(struct drm_device *dev,
return -EFAULT;
if (args->flags & I915_USERPTR_READ_ONLY) {
/*
* On almost all of the older hw, we cannot tell the GPU that
* a page is readonly.
*/
if (!dev_priv->gt.vm->has_read_only)
return -ENODEV;
}


@ -12,10 +12,14 @@ static void huge_free_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages)
{
unsigned long nreal = obj->scratch / PAGE_SIZE;
struct sgt_iter sgt_iter;
struct page *page;
for_each_sgt_page(page, sgt_iter, pages) {
__free_page(page);
if (!--nreal)
break;
}
sg_free_table(pages);
kfree(pages);
@ -70,7 +74,6 @@ static int huge_get_pages(struct drm_i915_gem_object *obj)
err:
huge_free_pages(obj, pages);
return -ENOMEM;
#undef GFP
}


@ -517,7 +517,7 @@ static int igt_mock_memory_region_huge_pages(void *arg)
i915_vma_unpin(vma);
i915_vma_close(vma);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
}
}
@ -650,7 +650,7 @@ static int igt_mock_ppgtt_misaligned_dma(void *arg)
i915_vma_close(vma);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
}
@ -678,7 +678,7 @@ static void close_object_list(struct list_head *objects,
list_del(&obj->st_link);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
}
}
@ -948,7 +948,7 @@ static int igt_mock_ppgtt_64K(void *arg)
i915_vma_close(vma);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
}
}
@ -1110,8 +1110,7 @@ static int __igt_write_huge(struct intel_context *ce,
out_vma_unpin:
i915_vma_unpin(vma);
out_vma_close:
__i915_vma_put(vma);
return err;
}
@ -1301,7 +1300,7 @@ static int igt_ppgtt_exhaust_huge(void *arg)
}
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
}
}
@ -1420,7 +1419,7 @@ try_again:
err = i915_gem_object_pin_pages(obj);
if (err) {
if (err == -ENXIO || err == -E2BIG) {
i915_gem_object_put(obj);
size >>= 1;
goto try_again;
@ -1442,7 +1441,7 @@ try_again:
}
out_unpin:
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
out_put:
i915_gem_object_put(obj);
@ -1530,7 +1529,7 @@ static int igt_ppgtt_sanity_check(void *arg)
err = igt_write_huge(ctx, obj);
i915_gem_object_unpin_pages(obj);
__i915_gem_object_put_pages(obj);
i915_gem_object_put(obj);
if (err) {
@ -1912,9 +1911,9 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
SUBTEST(igt_ppgtt_smoke_huge),
SUBTEST(igt_ppgtt_sanity_check),
};
struct i915_gem_context *ctx;
struct i915_address_space *vm;
struct file *file;
int err;
if (!HAS_PPGTT(i915)) {
@ -1944,6 +1943,6 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
err = i915_subtests(tests, ctx);
out_file:
mock_file_free(i915, file);
fput(file);
return err;
}


@@ -24,6 +24,7 @@ static int __igt_client_fill(struct intel_engine_cs *engine)
prandom_seed_state(&prng, i915_selftest.random_seed);
intel_engine_pm_get(engine);
do {
const u32 max_block_size = S16_MAX * PAGE_SIZE;
u32 sz = min_t(u64, ce->vm->total >> 4, prandom_u32_state(&prng));
@@ -99,6 +100,7 @@ err_put:
err_flush:
if (err == -ENOMEM)
err = 0;
intel_engine_pm_put(engine);
return err;
}


@@ -6,6 +6,7 @@
#include <linux/prime_numbers.h>
#include "gt/intel_engine_pm.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
#include "gt/intel_ring.h"
@@ -200,7 +201,7 @@ static int gpu_set(struct context *ctx, unsigned long offset, u32 v)
if (IS_ERR(vma))
return PTR_ERR(vma);
rq = i915_request_create(ctx->engine->kernel_context);
rq = intel_engine_create_kernel_request(ctx->engine);
if (IS_ERR(rq)) {
i915_vma_unpin(vma);
return PTR_ERR(rq);
@@ -326,6 +327,7 @@ static int igt_gem_coherency(void *arg)
ctx.engine = random_engine(i915, &prng);
GEM_BUG_ON(!ctx.engine);
pr_info("%s: using %s\n", __func__, ctx.engine->name);
intel_engine_pm_get(ctx.engine);
for (over = igt_coherency_mode; over->name; over++) {
if (!over->set)
@@ -404,6 +406,7 @@ static int igt_gem_coherency(void *arg)
}
}
free:
intel_engine_pm_put(ctx.engine);
kfree(offsets);
return err;


@@ -7,6 +7,7 @@
#include <linux/prime_numbers.h>
#include "gem/i915_gem_pm.h"
#include "gt/intel_engine_pm.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_requests.h"
#include "gt/intel_reset.h"
@@ -26,6 +27,12 @@
#define DW_PER_PAGE (PAGE_SIZE / sizeof(u32))
static inline struct i915_address_space *ctx_vm(struct i915_gem_context *ctx)
{
/* single threaded, private ctx */
return rcu_dereference_protected(ctx->vm, true);
}
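ctx_vm() above documents why the RCU read lock can be skipped: the test is single threaded and owns the context. The general form passes a lockdep expression rather than a bare `true` (the my_* types here are hypothetical):

#include <linux/mutex.h>
#include <linux/rcupdate.h>

struct my_ctx {
	struct mutex lock;
	struct my_vm __rcu *vm;
};

/*
 * rcu_dereference_protected() skips the read-side marker because the
 * caller guarantees no concurrent update; the condition is verified by
 * lockdep under CONFIG_PROVE_RCU.
 */
static struct my_vm *my_ctx_vm(struct my_ctx *c)
{
	return rcu_dereference_protected(c->vm, lockdep_is_held(&c->lock));
}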
static int live_nop_switch(void *arg)
{
const unsigned int nctx = 1024;
@@ -33,7 +40,7 @@ static int live_nop_switch(void *arg)
struct intel_engine_cs *engine;
struct i915_gem_context **ctx;
struct igt_live_test t;
struct drm_file *file;
struct file *file;
unsigned long n;
int err = -ENODEV;
@@ -67,25 +74,34 @@
}
for_each_uabi_engine(engine, i915) {
struct i915_request *rq;
struct i915_request *rq = NULL;
unsigned long end_time, prime;
ktime_t times[2] = {};
times[0] = ktime_get_raw();
for (n = 0; n < nctx; n++) {
rq = igt_request_alloc(ctx[n], engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
struct i915_request *this;
this = igt_request_alloc(ctx[n], engine);
if (IS_ERR(this)) {
err = PTR_ERR(this);
goto out_file;
}
i915_request_add(rq);
if (rq) {
i915_request_await_dma_fence(this, &rq->fence);
i915_request_put(rq);
}
rq = i915_request_get(this);
i915_request_add(this);
}
if (i915_request_wait(rq, 0, HZ / 5) < 0) {
pr_err("Failed to populate %d contexts\n", nctx);
intel_gt_set_wedged(&i915->gt);
i915_request_put(rq);
err = -EIO;
goto out_file;
}
i915_request_put(rq);
times[1] = ktime_get_raw();
@@ -100,13 +116,21 @@
for_each_prime_number_from(prime, 2, 8192) {
times[1] = ktime_get_raw();
rq = NULL;
for (n = 0; n < prime; n++) {
rq = igt_request_alloc(ctx[n % nctx], engine);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
struct i915_request *this;
this = igt_request_alloc(ctx[n % nctx], engine);
if (IS_ERR(this)) {
err = PTR_ERR(this);
goto out_file;
}
if (rq) { /* Force submission order */
i915_request_await_dma_fence(this, &rq->fence);
i915_request_put(rq);
}
/*
* This space is left intentionally blank.
*
@@ -121,14 +145,18 @@
* for latency.
*/
i915_request_add(rq);
rq = i915_request_get(this);
i915_request_add(this);
}
GEM_BUG_ON(!rq);
if (i915_request_wait(rq, 0, HZ / 5) < 0) {
pr_err("Switching between %ld contexts timed out\n",
prime);
intel_gt_set_wedged(&i915->gt);
i915_request_put(rq);
break;
}
i915_request_put(rq);
times[1] = ktime_sub(ktime_get_raw(), times[1]);
if (prime == 2)
@@ -149,7 +177,7 @@
}
out_file:
mock_file_free(i915, file);
fput(file);
return err;
}
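The rework above replaces "add and forget" with an explicitly ordered chain: each new request awaits the previous fence, only the newest reference is kept, and the final wait covers the whole chain. Condensed from the hunks above (error paths trimmed; all calls are the i915 helpers used there):

struct i915_request *rq = NULL;
unsigned long n;

for (n = 0; n < nctx; n++) {
	struct i915_request *this = igt_request_alloc(ctx[n], engine);

	if (IS_ERR(this))
		break;

	if (rq) { /* force submission order */
		i915_request_await_dma_fence(this, &rq->fence);
		i915_request_put(rq);
	}

	rq = i915_request_get(this); /* keep a ref past add() */
	i915_request_add(this);
}

if (rq) {
	if (i915_request_wait(rq, 0, HZ / 5) < 0)
		pr_err("request chain did not complete\n");
	i915_request_put(rq);
}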
@@ -255,7 +283,7 @@ static int live_parallel_switch(void *arg)
int (* const *fn)(void *arg);
struct i915_gem_context *ctx;
struct intel_context *ce;
struct drm_file *file;
struct file *file;
int n, m, count;
int err = 0;
@@ -309,7 +337,7 @@ static int live_parallel_switch(void *arg)
if (!data[m].ce[0])
continue;
ce = intel_context_create(ctx, data[m].ce[0]->engine);
ce = intel_context_create(data[m].ce[0]->engine);
if (IS_ERR(ce))
goto out;
@@ -377,7 +405,7 @@ out:
}
kfree(data);
out_file:
mock_file_free(i915, file);
fput(file);
return err;
}
@@ -502,17 +530,17 @@ out_unmap:
return err;
}
static int file_add_object(struct drm_file *file,
struct drm_i915_gem_object *obj)
static int file_add_object(struct file *file, struct drm_i915_gem_object *obj)
{
int err;
GEM_BUG_ON(obj->base.handle_count);
/* tie the object to the drm_file for easy reaping */
err = idr_alloc(&file->object_idr, &obj->base, 1, 0, GFP_KERNEL);
err = idr_alloc(&to_drm_file(file)->object_idr,
&obj->base, 1, 0, GFP_KERNEL);
if (err < 0)
return err;
return err;
i915_gem_object_get(obj);
obj->base.handle_count++;
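file_add_object() above leans on the DRM file's object_idr so the normal file-close path reaps the object. The core of that is plain <linux/idr.h> usage (tie_object_to_file is a hypothetical wrapper):

#include <linux/gfp.h>
#include <linux/idr.h>

/*
 * Publish obj in an IDR with a handle >= 1 (0 stays reserved as the
 * "no handle" value); idr_alloc() returns the new id or a -errno.
 */
static int tie_object_to_file(struct idr *object_idr, void *obj)
{
	int handle = idr_alloc(object_idr, obj, 1, 0, GFP_KERNEL);

	return handle < 0 ? handle : 0;
}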
@@ -521,7 +549,7 @@ static int file_add_object(struct drm_file *file,
static struct drm_i915_gem_object *
create_test_object(struct i915_address_space *vm,
struct drm_file *file,
struct file *file,
struct list_head *objects)
{
struct drm_i915_gem_object *obj;
@@ -621,9 +649,9 @@ static int igt_ctx_exec(void *arg)
unsigned long ncontexts, ndwords, dw;
struct i915_request *tq[5] = {};
struct igt_live_test t;
struct drm_file *file;
IGT_TIMEOUT(end_time);
LIST_HEAD(objects);
struct file *file;
if (!intel_engine_can_store_dword(engine))
continue;
@@ -716,7 +744,7 @@ out_file:
if (igt_live_test_end(&t))
err = -EIO;
mock_file_free(i915, file);
fput(file);
if (err)
return err;
@@ -733,7 +761,7 @@ static int igt_shared_ctx_exec(void *arg)
struct i915_gem_context *parent;
struct intel_engine_cs *engine;
struct igt_live_test t;
struct drm_file *file;
struct file *file;
int err = 0;
/*
@@ -786,14 +814,15 @@ static int igt_shared_ctx_exec(void *arg)
}
mutex_lock(&ctx->mutex);
__assign_ppgtt(ctx, parent->vm);
__assign_ppgtt(ctx, ctx_vm(parent));
mutex_unlock(&ctx->mutex);
ce = i915_gem_context_get_engine(ctx, engine->legacy_idx);
GEM_BUG_ON(IS_ERR(ce));
if (!obj) {
obj = create_test_object(parent->vm, file, &objects);
obj = create_test_object(ctx_vm(parent),
file, &objects);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
intel_context_put(ce);
@@ -854,7 +883,7 @@ out_test:
if (igt_live_test_end(&t))
err = -EIO;
out_file:
mock_file_free(i915, file);
fput(file);
return err;
}
@@ -1140,8 +1169,7 @@ out:
igt_spinner_end(spin);
if ((flags & TEST_IDLE) && ret == 0) {
ret = intel_gt_wait_for_idle(ce->engine->gt,
MAX_SCHEDULE_TIMEOUT);
ret = igt_flush_test(ce->engine->i915);
if (ret)
return ret;
@@ -1163,9 +1191,11 @@ __sseu_test(const char *name,
struct igt_spinner *spin = NULL;
int ret;
intel_engine_pm_get(ce->engine);
ret = __sseu_prepare(name, flags, ce, &spin);
if (ret)
return ret;
goto out_pm;
ret = intel_context_reconfigure_sseu(ce, sseu);
if (ret)
@@ -1180,6 +1210,8 @@ out_spin:
igt_spinner_fini(spin);
kfree(spin);
}
out_pm:
intel_engine_pm_put(ce->engine);
return ret;
}
@@ -1232,8 +1264,7 @@ __igt_ctx_sseu(struct drm_i915_private *i915,
hweight32(engine->sseu.slice_mask),
hweight32(pg_sseu.slice_mask));
ce = intel_context_create(engine->kernel_context->gem_context,
engine);
ce = intel_context_create(engine);
if (IS_ERR(ce)) {
ret = PTR_ERR(ce);
goto out_put;
@@ -1311,16 +1342,18 @@ static int igt_ctx_sseu(void *arg)
static int igt_ctx_readonly(void *arg)
{
struct drm_i915_private *i915 = arg;
unsigned long idx, ndwords, dw, num_engines;
struct drm_i915_gem_object *obj = NULL;
struct i915_request *tq[5] = {};
struct i915_gem_engines_iter it;
struct i915_address_space *vm;
struct i915_gem_context *ctx;
unsigned long idx, ndwords, dw;
struct intel_context *ce;
struct igt_live_test t;
struct drm_file *file;
I915_RND_STATE(prng);
IGT_TIMEOUT(end_time);
LIST_HEAD(objects);
struct file *file;
int err = -ENODEV;
/*
@@ -1343,21 +1376,21 @@ static int igt_ctx_readonly(void *arg)
goto out_file;
}
rcu_read_lock();
vm = rcu_dereference(ctx->vm) ?: &i915->ggtt.alias->vm;
vm = ctx_vm(ctx) ?: &i915->ggtt.alias->vm;
if (!vm || !vm->has_read_only) {
rcu_read_unlock();
err = 0;
goto out_file;
}
rcu_read_unlock();
num_engines = 0;
for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it)
if (intel_engine_can_store_dword(ce->engine))
num_engines++;
i915_gem_context_unlock_engines(ctx);
ndwords = 0;
dw = 0;
while (!time_after(jiffies, end_time)) {
struct i915_gem_engines_iter it;
struct intel_context *ce;
for_each_gem_engine(ce,
i915_gem_context_lock_engines(ctx), it) {
if (!intel_engine_can_store_dword(ce->engine))
@@ -1380,7 +1413,7 @@
pr_err("Failed to fill dword %lu [%lu/%lu] with gpu (%s) [full-ppgtt? %s], err=%d\n",
ndwords, dw, max_dwords(obj),
ce->engine->name,
yesno(!!rcu_access_pointer(ctx->vm)),
yesno(!!ctx_vm(ctx)),
err);
i915_gem_context_unlock_engines(ctx);
goto out_file;
@@ -1400,8 +1433,8 @@
}
i915_gem_context_unlock_engines(ctx);
}
pr_info("Submitted %lu dwords (across %u engines)\n",
ndwords, RUNTIME_INFO(i915)->num_engines);
pr_info("Submitted %lu dwords (across %lu engines)\n",
ndwords, num_engines);
dw = 0;
idx = 0;
@@ -1426,7 +1459,7 @@ out_file:
if (igt_live_test_end(&t))
err = -EIO;
mock_file_free(i915, file);
fput(file);
return err;
}
@@ -1466,7 +1499,7 @@ static int write_to_scratch(struct i915_gem_context *ctx,
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto err;
goto out;
}
*cmd++ = MI_STORE_DWORD_IMM_GEN4;
@@ -1488,12 +1521,12 @@ static int write_to_scratch(struct i915_gem_context *ctx,
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_vm;
goto out_vm;
}
err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_OFFSET_FIXED);
if (err)
goto err_vm;
goto out_vm;
err = check_scratch(vm, offset);
if (err)
@@ -1517,22 +1550,20 @@ static int write_to_scratch(struct i915_gem_context *ctx,
if (err)
goto skip_request;
i915_vma_unpin_and_release(&vma, 0);
i915_vma_unpin(vma);
i915_request_add(rq);
i915_vm_put(vm);
return 0;
goto out_vm;
skip_request:
i915_request_skip(rq, err);
err_request:
i915_request_add(rq);
err_unpin:
i915_vma_unpin(vma);
err_vm:
out_vm:
i915_vm_put(vm);
err:
out:
i915_gem_object_put(obj);
return err;
}
@@ -1560,7 +1591,7 @@ static int read_from_scratch(struct i915_gem_context *ctx,
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto err;
goto out;
}
memset(cmd, POISON_INUSE, PAGE_SIZE);
@@ -1592,12 +1623,12 @@ static int read_from_scratch(struct i915_gem_context *ctx,
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
err = PTR_ERR(vma);
goto err_vm;
goto out_vm;
}
err = i915_vma_pin(vma, 0, 0, PIN_USER | PIN_OFFSET_FIXED);
if (err)
goto err_vm;
goto out_vm;
err = check_scratch(vm, offset);
if (err)
@@ -1630,29 +1661,27 @@ static int read_from_scratch(struct i915_gem_context *ctx,
err = i915_gem_object_set_to_cpu_domain(obj, false);
i915_gem_object_unlock(obj);
if (err)
goto err_vm;
goto out_vm;
cmd = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(cmd)) {
err = PTR_ERR(cmd);
goto err_vm;
goto out_vm;
}
*value = cmd[result / sizeof(*cmd)];
i915_gem_object_unpin_map(obj);
i915_gem_object_put(obj);
return 0;
goto out_vm;
skip_request:
i915_request_skip(rq, err);
err_request:
i915_request_add(rq);
err_unpin:
i915_vma_unpin(vma);
err_vm:
out_vm:
i915_vm_put(vm);
err:
out:
i915_gem_object_put(obj);
return err;
}
@@ -1661,11 +1690,11 @@ static int igt_vm_isolation(void *arg)
{
struct drm_i915_private *i915 = arg;
struct i915_gem_context *ctx_a, *ctx_b;
unsigned long num_engines, count;
struct intel_engine_cs *engine;
struct igt_live_test t;
struct drm_file *file;
I915_RND_STATE(prng);
unsigned long count;
struct file *file;
u64 vm_total;
int err;
@@ -1698,14 +1727,15 @@ static int igt_vm_isolation(void *arg)
}
/* We can only test vm isolation, if the vm are distinct */
if (ctx_a->vm == ctx_b->vm)
if (ctx_vm(ctx_a) == ctx_vm(ctx_b))
goto out_file;
vm_total = ctx_a->vm->total;
GEM_BUG_ON(ctx_b->vm->total != vm_total);
vm_total = ctx_vm(ctx_a)->total;
GEM_BUG_ON(ctx_vm(ctx_b)->total != vm_total);
vm_total -= I915_GTT_PAGE_SIZE;
count = 0;
num_engines = 0;
for_each_uabi_engine(engine, i915) {
IGT_TIMEOUT(end_time);
unsigned long this = 0;
@@ -1743,14 +1773,15 @@ static int igt_vm_isolation(void *arg)
this++;
}
count += this;
num_engines++;
}
pr_info("Checked %lu scratch offsets across %d engines\n",
count, RUNTIME_INFO(i915)->num_engines);
pr_info("Checked %lu scratch offsets across %lu engines\n",
count, num_engines);
out_file:
if (igt_live_test_end(&t))
err = -EIO;
mock_file_free(i915, file);
fput(file);
return err;
}


@@ -6,12 +6,14 @@
#include <linux/prime_numbers.h>
#include "gt/intel_engine_pm.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
#include "huge_gem_object.h"
#include "i915_selftest.h"
#include "selftests/i915_random.h"
#include "selftests/igt_flush_test.h"
#include "selftests/igt_mmap.h"
struct tile {
unsigned int width;
@@ -161,7 +163,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
kunmap(p);
out:
i915_vma_destroy(vma);
__i915_vma_put(vma);
return err;
}
@@ -255,7 +257,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
if (err)
return err;
i915_vma_destroy(vma);
__i915_vma_put(vma);
if (igt_timeout(end_time,
"%s: timed out after tiling=%d stride=%d\n",
@@ -535,7 +537,7 @@ static int make_obj_busy(struct drm_i915_gem_object *obj)
if (err)
return err;
rq = i915_request_create(engine->kernel_context);
rq = intel_engine_create_kernel_request(engine);
if (IS_ERR(rq)) {
i915_vma_unpin(vma);
return PTR_ERR(rq);
@@ -563,16 +565,16 @@ static bool assert_mmap_offset(struct drm_i915_private *i915,
int expected)
{
struct drm_i915_gem_object *obj;
int err;
struct i915_mmap_offset *mmo;
obj = i915_gem_object_create_internal(i915, size);
if (IS_ERR(obj))
return PTR_ERR(obj);
err = create_mmap_offset(obj);
mmo = mmap_offset_attach(obj, I915_MMAP_OFFSET_GTT, NULL);
i915_gem_object_put(obj);
return err == expected;
return PTR_ERR_OR_ZERO(mmo) == expected;
}
static void disable_retire_worker(struct drm_i915_private *i915)
@@ -606,28 +608,50 @@ static int igt_mmap_offset_exhaustion(void *arg)
struct drm_i915_private *i915 = arg;
struct drm_mm *mm = &i915->drm.vma_offset_manager->vm_addr_space_mm;
struct drm_i915_gem_object *obj;
struct drm_mm_node resv, *hole;
u64 hole_start, hole_end;
int loop, err;
struct drm_mm_node *hole, *next;
struct i915_mmap_offset *mmo;
int loop, err = 0;
/* Disable background reaper */
disable_retire_worker(i915);
GEM_BUG_ON(!i915->gt.awake);
intel_gt_retire_requests(&i915->gt);
i915_gem_drain_freed_objects(i915);
/* Trim the device mmap space to only a page */
memset(&resv, 0, sizeof(resv));
drm_mm_for_each_hole(hole, mm, hole_start, hole_end) {
resv.start = hole_start;
resv.size = hole_end - hole_start - 1; /* PAGE_SIZE units */
mmap_offset_lock(i915);
err = drm_mm_reserve_node(mm, &resv);
mmap_offset_unlock(i915);
if (err) {
pr_err("Failed to trim VMA manager, err=%d\n", err);
mmap_offset_lock(i915);
loop = 1; /* PAGE_SIZE units */
list_for_each_entry_safe(hole, next, &mm->hole_stack, hole_stack) {
struct drm_mm_node *resv;
resv = kzalloc(sizeof(*resv), GFP_NOWAIT);
if (!resv) {
err = -ENOMEM;
goto out_park;
}
resv->start = drm_mm_hole_node_start(hole) + loop;
resv->size = hole->hole_size - loop;
resv->color = -1ul;
loop = 0;
if (!resv->size) {
kfree(resv);
continue;
}
pr_debug("Reserving hole [%llx + %llx]\n",
resv->start, resv->size);
err = drm_mm_reserve_node(mm, resv);
if (err) {
pr_err("Failed to trim VMA manager, err=%d\n", err);
kfree(resv);
goto out_park;
}
break;
}
GEM_BUG_ON(!list_is_singular(&mm->hole_stack));
mmap_offset_unlock(i915);
/* Just fits! */
if (!assert_mmap_offset(i915, PAGE_SIZE, 0)) {
@@ -650,9 +674,10 @@
goto out;
}
err = create_mmap_offset(obj);
if (err) {
mmo = mmap_offset_attach(obj, I915_MMAP_OFFSET_GTT, NULL);
if (IS_ERR(mmo)) {
pr_err("Unable to insert object into reclaimed hole\n");
err = PTR_ERR(mmo);
goto err_obj;
}
@@ -684,9 +709,15 @@
out:
mmap_offset_lock(i915);
drm_mm_remove_node(&resv);
mmap_offset_unlock(i915);
out_park:
drm_mm_for_each_node_safe(hole, next, mm) {
if (hole->color != -1ul)
continue;
drm_mm_remove_node(hole);
kfree(hole);
}
mmap_offset_unlock(i915);
restore_retire_worker(i915);
return err;
err_obj:
@@ -694,12 +725,258 @@ err_obj:
goto out;
}
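The reworked exhaustion test above fills every drm_mm hole with kzalloc'ed nodes tagged color = -1ul, so the cleanup pass can tell its own reservations apart from nodes owned by the rest of the driver. The reclaim half of that pattern, as a stand-alone sketch:

#include <drm/drm_mm.h>
#include <linux/slab.h>

#define TRIM_COLOR (-1ul) /* out-of-band marker, as in the test above */

static void release_trim_nodes(struct drm_mm *mm)
{
	struct drm_mm_node *node, *next;

	drm_mm_for_each_node_safe(node, next, mm) {
		if (node->color != TRIM_COLOR)
			continue; /* not one of ours */
		drm_mm_remove_node(node);
		kfree(node);
	}
}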
#define expand32(x) (((x) << 0) | ((x) << 8) | ((x) << 16) | ((x) << 24))
static int igt_mmap(void *arg, enum i915_mmap_type type)
{
struct drm_i915_private *i915 = arg;
struct drm_i915_gem_object *obj;
struct i915_mmap_offset *mmo;
struct vm_area_struct *area;
unsigned long addr;
void *vaddr;
int err = 0, i;
if (!i915_ggtt_has_aperture(&i915->ggtt))
return 0;
obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
if (IS_ERR(obj))
return PTR_ERR(obj);
vaddr = i915_gem_object_pin_map(obj, I915_MAP_WB);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto out;
}
memset(vaddr, POISON_INUSE, PAGE_SIZE);
i915_gem_object_flush_map(obj);
i915_gem_object_unpin_map(obj);
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo)) {
err = PTR_ERR(mmo);
goto out;
}
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr)) {
err = addr;
goto out;
}
pr_debug("igt_mmap() @ %lx\n", addr);
area = find_vma(current->mm, addr);
if (!area) {
pr_err("Did not create a vm_area_struct for the mmap\n");
err = -EINVAL;
goto out_unmap;
}
if (area->vm_private_data != mmo) {
pr_err("vm_area_struct did not point back to our mmap_offset object!\n");
err = -EINVAL;
goto out_unmap;
}
for (i = 0; i < PAGE_SIZE / sizeof(u32); i++) {
u32 __user *ux = u64_to_user_ptr((u64)(addr + i * sizeof(*ux)));
u32 x;
if (get_user(x, ux)) {
pr_err("Unable to read from mmap, offset:%zd\n",
i * sizeof(x));
err = -EFAULT;
break;
}
if (x != expand32(POISON_INUSE)) {
pr_err("Read incorrect value from mmap, offset:%zd, found:%x, expected:%x\n",
i * sizeof(x), x, expand32(POISON_INUSE));
err = -EINVAL;
break;
}
x = expand32(POISON_FREE);
if (put_user(x, ux)) {
pr_err("Unable to write to mmap, offset:%zd\n",
i * sizeof(x));
err = -EFAULT;
break;
}
}
out_unmap:
vm_munmap(addr, PAGE_SIZE);
vaddr = i915_gem_object_pin_map(obj, I915_MAP_FORCE_WC);
if (IS_ERR(vaddr)) {
err = PTR_ERR(vaddr);
goto out;
}
if (err == 0 && memchr_inv(vaddr, POISON_FREE, PAGE_SIZE)) {
pr_err("Write via mmap did not land in backing store\n");
err = -EINVAL;
}
i915_gem_object_unpin_map(obj);
out:
i915_gem_object_put(obj);
return err;
}
static int igt_mmap_gtt(void *arg)
{
return igt_mmap(arg, I915_MMAP_TYPE_GTT);
}
static int igt_mmap_cpu(void *arg)
{
return igt_mmap(arg, I915_MMAP_TYPE_WC);
}
static int check_present_pte(pte_t *pte, unsigned long addr, void *data)
{
if (!pte_present(*pte) || pte_none(*pte)) {
pr_err("missing PTE:%lx\n",
(addr - (unsigned long)data) >> PAGE_SHIFT);
return -EINVAL;
}
return 0;
}
static int check_absent_pte(pte_t *pte, unsigned long addr, void *data)
{
if (pte_present(*pte) && !pte_none(*pte)) {
pr_err("present PTE:%lx; expected to be revoked\n",
(addr - (unsigned long)data) >> PAGE_SHIFT);
return -EINVAL;
}
return 0;
}
static int check_present(unsigned long addr, unsigned long len)
{
return apply_to_page_range(current->mm, addr, len,
check_present_pte, (void *)addr);
}
static int check_absent(unsigned long addr, unsigned long len)
{
return apply_to_page_range(current->mm, addr, len,
check_absent_pte, (void *)addr);
}
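check_present()/check_absent() above work because apply_to_page_range() walks (allocating where necessary) the page tables for a user range and calls the supplied function on every PTE. A stand-alone variant that counts present PTEs instead of failing on the first mismatch (the three-argument pte_fn_t matches the v5.5-era signature used above):

#include <linux/mm.h>
#include <linux/sched.h>

static int count_present_pte(pte_t *pte, unsigned long addr, void *data)
{
	if (pte_present(*pte))
		(*(unsigned long *)data)++;
	return 0; /* a non-zero return would abort the walk */
}

static unsigned long count_present(unsigned long addr, unsigned long len)
{
	unsigned long count = 0;

	if (apply_to_page_range(current->mm, addr, len,
				count_present_pte, &count))
		return 0;

	return count;
}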
static int prefault_range(u64 start, u64 len)
{
const char __user *addr, *end;
char __maybe_unused c;
int err;
addr = u64_to_user_ptr(start);
end = addr + len;
for (; addr < end; addr += PAGE_SIZE) {
err = __get_user(c, addr);
if (err)
return err;
}
return __get_user(c, end - 1);
}
static int igt_mmap_revoke(void *arg, enum i915_mmap_type type)
{
struct drm_i915_private *i915 = arg;
struct drm_i915_gem_object *obj;
struct i915_mmap_offset *mmo;
unsigned long addr;
int err;
if (!i915_ggtt_has_aperture(&i915->ggtt))
return 0;
obj = i915_gem_object_create_internal(i915, SZ_4M);
if (IS_ERR(obj))
return PTR_ERR(obj);
mmo = mmap_offset_attach(obj, type, NULL);
if (IS_ERR(mmo)) {
err = PTR_ERR(mmo);
goto out;
}
addr = igt_mmap_node(i915, &mmo->vma_node, 0, PROT_WRITE, MAP_SHARED);
if (IS_ERR_VALUE(addr)) {
err = addr;
goto out;
}
err = prefault_range(addr, obj->base.size);
if (err)
goto out_unmap;
GEM_BUG_ON(mmo->mmap_type == I915_MMAP_TYPE_GTT &&
!atomic_read(&obj->bind_count));
err = check_present(addr, obj->base.size);
if (err)
goto out_unmap;
/*
* After unbinding the object from the GGTT, its address may be reused
* for other objects. Ergo we have to revoke the previous mmap PTE
* access as it no longer points to the same object.
*/
err = i915_gem_object_unbind(obj, I915_GEM_OBJECT_UNBIND_ACTIVE);
if (err) {
pr_err("Failed to unbind object!\n");
goto out_unmap;
}
GEM_BUG_ON(atomic_read(&obj->bind_count));
if (type != I915_MMAP_TYPE_GTT) {
__i915_gem_object_put_pages(obj);
if (i915_gem_object_has_pages(obj)) {
pr_err("Failed to put-pages object!\n");
err = -EINVAL;
goto out_unmap;
}
}
err = check_absent(addr, obj->base.size);
if (err)
goto out_unmap;
out_unmap:
vm_munmap(addr, obj->base.size);
out:
i915_gem_object_put(obj);
return err;
}
static int igt_mmap_gtt_revoke(void *arg)
{
return igt_mmap_revoke(arg, I915_MMAP_TYPE_GTT);
}
static int igt_mmap_cpu_revoke(void *arg)
{
return igt_mmap_revoke(arg, I915_MMAP_TYPE_WC);
}
int i915_gem_mman_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_partial_tiling),
SUBTEST(igt_smoke_tiling),
SUBTEST(igt_mmap_offset_exhaustion),
SUBTEST(igt_mmap_gtt),
SUBTEST(igt_mmap_cpu),
SUBTEST(igt_mmap_gtt_revoke),
SUBTEST(igt_mmap_cpu_revoke),
};
return i915_subtests(tests, i915);
}


@@ -41,6 +41,7 @@ static int __perf_fill_blt(struct drm_i915_gem_object *obj)
if (!engine)
return 0;
intel_engine_pm_get(engine);
for (pass = 0; pass < ARRAY_SIZE(t); pass++) {
struct intel_context *ce = engine->kernel_context;
ktime_t t0, t1;
@@ -49,17 +50,20 @@ static int __perf_fill_blt(struct drm_i915_gem_object *obj)
err = i915_gem_object_fill_blt(obj, ce, 0);
if (err)
return err;
break;
err = i915_gem_object_wait(obj,
I915_WAIT_ALL,
MAX_SCHEDULE_TIMEOUT);
if (err)
return err;
break;
t1 = ktime_get();
t[pass] = ktime_sub(t1, t0);
}
intel_engine_pm_put(engine);
if (err)
return err;
sort(t, ARRAY_SIZE(t), sizeof(*t), wrap_ktime_compare, NULL);
pr_info("%s: blt %zd KiB fill: %lld MiB/s\n",
@@ -109,6 +113,7 @@ static int __perf_copy_blt(struct drm_i915_gem_object *src,
struct intel_engine_cs *engine;
ktime_t t[5];
int pass;
int err = 0;
engine = intel_engine_lookup_user(i915,
I915_ENGINE_CLASS_COPY,
@@ -116,26 +121,29 @@ static int __perf_copy_blt(struct drm_i915_gem_object *src,
if (!engine)
return 0;
intel_engine_pm_get(engine);
for (pass = 0; pass < ARRAY_SIZE(t); pass++) {
struct intel_context *ce = engine->kernel_context;
ktime_t t0, t1;
int err;
t0 = ktime_get();
err = i915_gem_object_copy_blt(src, dst, ce);
if (err)
return err;
break;
err = i915_gem_object_wait(dst,
I915_WAIT_ALL,
MAX_SCHEDULE_TIMEOUT);
if (err)
return err;
break;
t1 = ktime_get();
t[pass] = ktime_sub(t1, t0);
}
intel_engine_pm_put(engine);
if (err)
return err;
sort(t, ARRAY_SIZE(t), sizeof(*t), wrap_ktime_compare, NULL);
pr_info("%s: blt %zd KiB copy: %lld MiB/s\n",
@@ -186,6 +194,8 @@ err_src:
struct igt_thread_arg {
struct drm_i915_private *i915;
struct i915_gem_context *ctx;
struct file *file;
struct rnd_state prng;
unsigned int n_cpus;
};
@@ -198,24 +208,20 @@ static int igt_fill_blt_thread(void *arg)
struct drm_i915_gem_object *obj;
struct i915_gem_context *ctx;
struct intel_context *ce;
struct drm_file *file;
unsigned int prio;
IGT_TIMEOUT(end);
int err;
file = mock_file(i915);
if (IS_ERR(file))
return PTR_ERR(file);
ctx = thread->ctx;
if (!ctx) {
ctx = live_context(i915, thread->file);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
ctx = live_context(i915, file);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out_file;
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
ctx->sched.priority = I915_USER_PRIORITY(prio);
}
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
ctx->sched.priority = I915_USER_PRIORITY(prio);
ce = i915_gem_context_get_engine(ctx, BCS0);
GEM_BUG_ON(IS_ERR(ce));
@@ -300,8 +306,6 @@ err_flush:
err = 0;
intel_context_put(ce);
out_file:
mock_file_free(i915, file);
return err;
}
@@ -313,24 +317,20 @@ static int igt_copy_blt_thread(void *arg)
struct drm_i915_gem_object *src, *dst;
struct i915_gem_context *ctx;
struct intel_context *ce;
struct drm_file *file;
unsigned int prio;
IGT_TIMEOUT(end);
int err;
file = mock_file(i915);
if (IS_ERR(file))
return PTR_ERR(file);
ctx = thread->ctx;
if (!ctx) {
ctx = live_context(i915, thread->file);
if (IS_ERR(ctx))
return PTR_ERR(ctx);
ctx = live_context(i915, file);
if (IS_ERR(ctx)) {
err = PTR_ERR(ctx);
goto out_file;
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
ctx->sched.priority = I915_USER_PRIORITY(prio);
}
prio = i915_prandom_u32_max_state(I915_PRIORITY_MAX, prng);
ctx->sched.priority = I915_USER_PRIORITY(prio);
ce = i915_gem_context_get_engine(ctx, BCS0);
GEM_BUG_ON(IS_ERR(ce));
@@ -431,19 +431,18 @@ err_flush:
err = 0;
intel_context_put(ce);
out_file:
mock_file_free(i915, file);
return err;
}
static int igt_threaded_blt(struct drm_i915_private *i915,
int (*blt_fn)(void *arg))
int (*blt_fn)(void *arg),
unsigned int flags)
#define SINGLE_CTX BIT(0)
{
struct igt_thread_arg *thread;
struct task_struct **tsk;
unsigned int n_cpus, i;
I915_RND_STATE(prng);
unsigned int n_cpus;
unsigned int i;
int err = 0;
n_cpus = num_online_cpus() + 1;
@@ -453,13 +452,27 @@ static int igt_threaded_blt(struct drm_i915_private *i915,
return 0;
thread = kcalloc(n_cpus, sizeof(struct igt_thread_arg), GFP_KERNEL);
if (!thread) {
kfree(tsk);
return 0;
if (!thread)
goto out_tsk;
thread[0].file = mock_file(i915);
if (IS_ERR(thread[0].file)) {
err = PTR_ERR(thread[0].file);
goto out_thread;
}
if (flags & SINGLE_CTX) {
thread[0].ctx = live_context(i915, thread[0].file);
if (IS_ERR(thread[0].ctx)) {
err = PTR_ERR(thread[0].ctx);
goto out_file;
}
}
for (i = 0; i < n_cpus; ++i) {
thread[i].i915 = i915;
thread[i].file = thread[0].file;
thread[i].ctx = thread[0].ctx;
thread[i].n_cpus = n_cpus;
thread[i].prng =
I915_RND_STATE_INITIALIZER(prandom_u32_state(&prng));
@@ -488,29 +501,42 @@ static int igt_threaded_blt(struct drm_i915_private *i915,
put_task_struct(tsk[i]);
}
kfree(tsk);
out_file:
fput(thread[0].file);
out_thread:
kfree(thread);
out_tsk:
kfree(tsk);
return err;
}
static int igt_fill_blt(void *arg)
{
return igt_threaded_blt(arg, igt_fill_blt_thread);
return igt_threaded_blt(arg, igt_fill_blt_thread, 0);
}
static int igt_fill_blt_ctx0(void *arg)
{
return igt_threaded_blt(arg, igt_fill_blt_thread, SINGLE_CTX);
}
static int igt_copy_blt(void *arg)
{
return igt_threaded_blt(arg, igt_copy_blt_thread);
return igt_threaded_blt(arg, igt_copy_blt_thread, 0);
}
static int igt_copy_blt_ctx0(void *arg)
{
return igt_threaded_blt(arg, igt_copy_blt_thread, SINGLE_CTX);
}
int i915_gem_object_blt_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(perf_fill_blt),
SUBTEST(perf_copy_blt),
SUBTEST(igt_fill_blt),
SUBTEST(igt_fill_blt_ctx0),
SUBTEST(igt_copy_blt),
SUBTEST(igt_copy_blt_ctx0),
};
if (intel_gt_is_wedged(&i915->gt))
@@ -521,3 +547,16 @@ int i915_gem_object_blt_live_selftests(struct drm_i915_private *i915)
return i915_live_subtests(tests, i915);
}
int i915_gem_object_blt_perf_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(perf_fill_blt),
SUBTEST(perf_copy_blt),
};
if (intel_gt_is_wedged(&i915->gt))
return 0;
return i915_live_subtests(tests, i915);
}


@@ -5,6 +5,7 @@
*/
#include "mock_context.h"
#include "selftests/mock_drm.h"
#include "selftests/mock_gtt.h"
struct i915_gem_context *
@@ -36,9 +37,7 @@
if (name) {
struct i915_ppgtt *ppgtt;
ctx->name = kstrdup(name, GFP_KERNEL);
if (!ctx->name)
goto err_put;
strncpy(ctx->name, name, sizeof(ctx->name));
ppgtt = mock_ppgtt(i915, name);
if (!ppgtt)
@@ -74,7 +73,7 @@ void mock_init_contexts(struct drm_i915_private *i915)
}
struct i915_gem_context *
live_context(struct drm_i915_private *i915, struct drm_file *file)
live_context(struct drm_i915_private *i915, struct file *file)
{
struct i915_gem_context *ctx;
int err;
@@ -83,7 +82,7 @@ live_context(struct drm_i915_private *i915, struct drm_file *file)
if (IS_ERR(ctx))
return ctx;
err = gem_context_register(ctx, file->driver_priv);
err = gem_context_register(ctx, to_drm_file(file)->driver_priv);
if (err < 0)
goto err_ctx;
@@ -97,7 +96,16 @@ err_ctx:
struct i915_gem_context *
kernel_context(struct drm_i915_private *i915)
{
return i915_gem_context_create_kernel(i915, I915_PRIORITY_NORMAL);
struct i915_gem_context *ctx;
ctx = i915_gem_create_context(i915, 0);
if (IS_ERR(ctx))
return ctx;
i915_gem_context_clear_bannable(ctx);
i915_gem_context_set_persistence(ctx);
return ctx;
}
void kernel_context_close(struct i915_gem_context *ctx)


@@ -7,6 +7,9 @@
#ifndef __MOCK_CONTEXT_H
#define __MOCK_CONTEXT_H
struct file;
struct drm_i915_private;
void mock_init_contexts(struct drm_i915_private *i915);
struct i915_gem_context *
@@ -16,7 +19,7 @@ mock_context(struct drm_i915_private *i915,
void mock_context_close(struct i915_gem_context *ctx);
struct i915_gem_context *
live_context(struct drm_i915_private *i915, struct drm_file *file);
live_context(struct drm_i915_private *i915, struct file *file);
struct i915_gem_context *kernel_context(struct drm_i915_private *i915);
void kernel_context_close(struct i915_gem_context *ctx);


@@ -14,7 +14,7 @@ struct mock_dmabuf {
struct page *pages[];
};
static struct mock_dmabuf *to_mock(struct dma_buf *buf)
static inline struct mock_dmabuf *to_mock(struct dma_buf *buf)
{
return buf->priv;
}


@@ -0,0 +1,36 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2019 Intel Corporation
*/
#include <drm/drm_print.h>
#include "debugfs_engines.h"
#include "debugfs_gt.h"
#include "i915_drv.h" /* for_each_engine! */
#include "intel_engine.h"
static int engines_show(struct seq_file *m, void *data)
{
struct intel_gt *gt = m->private;
struct intel_engine_cs *engine;
enum intel_engine_id id;
struct drm_printer p;
p = drm_seq_file_printer(m);
for_each_engine(engine, gt, id)
intel_engine_dump(engine, &p, "%s\n", engine->name);
return 0;
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(engines);
void debugfs_engines_register(struct intel_gt *gt, struct dentry *root)
{
static const struct debugfs_gt_file files[] = {
{ "engines", &engines_fops },
};
debugfs_gt_register_files(gt, root, files, ARRAY_SIZE(files));
}


@@ -0,0 +1,14 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2019 Intel Corporation
*/
#ifndef DEBUGFS_ENGINES_H
#define DEBUGFS_ENGINES_H
struct intel_gt;
struct dentry;
void debugfs_engines_register(struct intel_gt *gt, struct dentry *root);
#endif /* DEBUGFS_ENGINES_H */


@@ -0,0 +1,42 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2019 Intel Corporation
*/
#include <linux/debugfs.h>
#include "debugfs_engines.h"
#include "debugfs_gt.h"
#include "debugfs_gt_pm.h"
#include "i915_drv.h"
void debugfs_gt_register(struct intel_gt *gt)
{
struct dentry *root;
if (!gt->i915->drm.primary->debugfs_root)
return;
root = debugfs_create_dir("gt", gt->i915->drm.primary->debugfs_root);
if (IS_ERR(root))
return;
debugfs_engines_register(gt, root);
debugfs_gt_pm_register(gt, root);
}
void debugfs_gt_register_files(struct intel_gt *gt,
struct dentry *root,
const struct debugfs_gt_file *files,
unsigned long count)
{
while (count--) {
if (!files->eval || files->eval(gt))
debugfs_create_file(files->name,
0444, root, gt,
files->fops);
files++;
}
}


@@ -0,0 +1,39 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2019 Intel Corporation
*/
#ifndef DEBUGFS_GT_H
#define DEBUGFS_GT_H
#include <linux/file.h>
struct intel_gt;
#define DEFINE_GT_DEBUGFS_ATTRIBUTE(__name) \
static int __name ## _open(struct inode *inode, struct file *file) \
{ \
return single_open(file, __name ## _show, inode->i_private); \
} \
static const struct file_operations __name ## _fops = { \
.owner = THIS_MODULE, \
.open = __name ## _open, \
.read = seq_read, \
.llseek = seq_lseek, \
.release = single_release, \
}
void debugfs_gt_register(struct intel_gt *gt);
struct debugfs_gt_file {
const char *name;
const struct file_operations *fops;
bool (*eval)(const struct intel_gt *gt);
};
void debugfs_gt_register_files(struct intel_gt *gt,
struct dentry *root,
const struct debugfs_gt_file *files,
unsigned long count);
#endif /* DEBUGFS_GT_H */
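Putting the two halves together, a new gt/ debugfs entry needs only a _show() routine, the macro, and a one-line table entry. A hypothetical consumer (my_stat is not part of the series; yesno() and gt->awake are used as in the hunks above):

#include "debugfs_gt.h"
#include "intel_gt.h"

static int my_stat_show(struct seq_file *m, void *data)
{
	struct intel_gt *gt = m->private; /* wired up by single_open() */

	seq_printf(m, "awake: %s\n", yesno(gt->awake));
	return 0;
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(my_stat);

static void debugfs_my_stat_register(struct intel_gt *gt, struct dentry *root)
{
	static const struct debugfs_gt_file files[] = {
		{ "my_stat", &my_stat_fops, NULL /* no eval(): always created */ },
	};

	debugfs_gt_register_files(gt, root, files, ARRAY_SIZE(files));
}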


@@ -0,0 +1,601 @@
// SPDX-License-Identifier: MIT
/*
* Copyright © 2019 Intel Corporation
*/
#include <linux/seq_file.h>
#include "debugfs_gt.h"
#include "debugfs_gt_pm.h"
#include "i915_drv.h"
#include "intel_gt.h"
#include "intel_llc.h"
#include "intel_rc6.h"
#include "intel_rps.h"
#include "intel_runtime_pm.h"
#include "intel_sideband.h"
#include "intel_uncore.h"
static int fw_domains_show(struct seq_file *m, void *data)
{
struct intel_gt *gt = m->private;
struct intel_uncore *uncore = gt->uncore;
struct intel_uncore_forcewake_domain *fw_domain;
unsigned int tmp;
seq_printf(m, "user.bypass_count = %u\n",
uncore->user_forcewake_count);
for_each_fw_domain(fw_domain, uncore, tmp)
seq_printf(m, "%s.wake_count = %u\n",
intel_uncore_forcewake_domain_to_str(fw_domain->id),
READ_ONCE(fw_domain->wake_count));
return 0;
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(fw_domains);
static void print_rc6_res(struct seq_file *m,
const char *title,
const i915_reg_t reg)
{
struct intel_gt *gt = m->private;
intel_wakeref_t wakeref;
with_intel_runtime_pm(gt->uncore->rpm, wakeref)
seq_printf(m, "%s %u (%llu us)\n", title,
intel_uncore_read(gt->uncore, reg),
intel_rc6_residency_us(&gt->rc6, reg));
}
static int vlv_drpc(struct seq_file *m)
{
struct intel_gt *gt = m->private;
struct intel_uncore *uncore = gt->uncore;
u32 rcctl1, pw_status;
pw_status = intel_uncore_read(uncore, VLV_GTLC_PW_STATUS);
rcctl1 = intel_uncore_read(uncore, GEN6_RC_CONTROL);
seq_printf(m, "RC6 Enabled: %s\n",
yesno(rcctl1 & (GEN7_RC_CTL_TO_MODE |
GEN6_RC_CTL_EI_MODE(1))));
seq_printf(m, "Render Power Well: %s\n",
(pw_status & VLV_GTLC_PW_RENDER_STATUS_MASK) ? "Up" : "Down");
seq_printf(m, "Media Power Well: %s\n",
(pw_status & VLV_GTLC_PW_MEDIA_STATUS_MASK) ? "Up" : "Down");
print_rc6_res(m, "Render RC6 residency since boot:", VLV_GT_RENDER_RC6);
print_rc6_res(m, "Media RC6 residency since boot:", VLV_GT_MEDIA_RC6);
return fw_domains_show(m, NULL);
}
static int gen6_drpc(struct seq_file *m)
{
struct intel_gt *gt = m->private;
struct drm_i915_private *i915 = gt->i915;
struct intel_uncore *uncore = gt->uncore;
u32 gt_core_status, rcctl1, rc6vids = 0;
u32 gen9_powergate_enable = 0, gen9_powergate_status = 0;
gt_core_status = intel_uncore_read_fw(uncore, GEN6_GT_CORE_STATUS);
rcctl1 = intel_uncore_read(uncore, GEN6_RC_CONTROL);
if (INTEL_GEN(i915) >= 9) {
gen9_powergate_enable =
intel_uncore_read(uncore, GEN9_PG_ENABLE);
gen9_powergate_status =
intel_uncore_read(uncore, GEN9_PWRGT_DOMAIN_STATUS);
}
if (INTEL_GEN(i915) <= 7)
sandybridge_pcode_read(i915, GEN6_PCODE_READ_RC6VIDS,
&rc6vids, NULL);
seq_printf(m, "RC1e Enabled: %s\n",
yesno(rcctl1 & GEN6_RC_CTL_RC1e_ENABLE));
seq_printf(m, "RC6 Enabled: %s\n",
yesno(rcctl1 & GEN6_RC_CTL_RC6_ENABLE));
if (INTEL_GEN(i915) >= 9) {
seq_printf(m, "Render Well Gating Enabled: %s\n",
yesno(gen9_powergate_enable & GEN9_RENDER_PG_ENABLE));
seq_printf(m, "Media Well Gating Enabled: %s\n",
yesno(gen9_powergate_enable & GEN9_MEDIA_PG_ENABLE));
}
seq_printf(m, "Deep RC6 Enabled: %s\n",
yesno(rcctl1 & GEN6_RC_CTL_RC6p_ENABLE));
seq_printf(m, "Deepest RC6 Enabled: %s\n",
yesno(rcctl1 & GEN6_RC_CTL_RC6pp_ENABLE));
seq_puts(m, "Current RC state: ");
switch (gt_core_status & GEN6_RCn_MASK) {
case GEN6_RC0:
if (gt_core_status & GEN6_CORE_CPD_STATE_MASK)
seq_puts(m, "Core Power Down\n");
else
seq_puts(m, "on\n");
break;
case GEN6_RC3:
seq_puts(m, "RC3\n");
break;
case GEN6_RC6:
seq_puts(m, "RC6\n");
break;
case GEN6_RC7:
seq_puts(m, "RC7\n");
break;
default:
seq_puts(m, "Unknown\n");
break;
}
seq_printf(m, "Core Power Down: %s\n",
yesno(gt_core_status & GEN6_CORE_CPD_STATE_MASK));
if (INTEL_GEN(i915) >= 9) {
seq_printf(m, "Render Power Well: %s\n",
(gen9_powergate_status &
GEN9_PWRGT_RENDER_STATUS_MASK) ? "Up" : "Down");
seq_printf(m, "Media Power Well: %s\n",
(gen9_powergate_status &
GEN9_PWRGT_MEDIA_STATUS_MASK) ? "Up" : "Down");
}
/* Not exactly sure what this is */
print_rc6_res(m, "RC6 \"Locked to RPn\" residency since boot:",
GEN6_GT_GFX_RC6_LOCKED);
print_rc6_res(m, "RC6 residency since boot:", GEN6_GT_GFX_RC6);
print_rc6_res(m, "RC6+ residency since boot:", GEN6_GT_GFX_RC6p);
print_rc6_res(m, "RC6++ residency since boot:", GEN6_GT_GFX_RC6pp);
if (INTEL_GEN(i915) <= 7) {
seq_printf(m, "RC6 voltage: %dmV\n",
GEN6_DECODE_RC6_VID(((rc6vids >> 0) & 0xff)));
seq_printf(m, "RC6+ voltage: %dmV\n",
GEN6_DECODE_RC6_VID(((rc6vids >> 8) & 0xff)));
seq_printf(m, "RC6++ voltage: %dmV\n",
GEN6_DECODE_RC6_VID(((rc6vids >> 16) & 0xff)));
}
return fw_domains_show(m, NULL);
}
static int ilk_drpc(struct seq_file *m)
{
struct intel_gt *gt = m->private;
struct intel_uncore *uncore = gt->uncore;
u32 rgvmodectl, rstdbyctl;
u16 crstandvid;
rgvmodectl = intel_uncore_read(uncore, MEMMODECTL);
rstdbyctl = intel_uncore_read(uncore, RSTDBYCTL);
crstandvid = intel_uncore_read16(uncore, CRSTANDVID);
seq_printf(m, "HD boost: %s\n", yesno(rgvmodectl & MEMMODE_BOOST_EN));
seq_printf(m, "Boost freq: %d\n",
(rgvmodectl & MEMMODE_BOOST_FREQ_MASK) >>
MEMMODE_BOOST_FREQ_SHIFT);
seq_printf(m, "HW control enabled: %s\n",
yesno(rgvmodectl & MEMMODE_HWIDLE_EN));
seq_printf(m, "SW control enabled: %s\n",
yesno(rgvmodectl & MEMMODE_SWMODE_EN));
seq_printf(m, "Gated voltage change: %s\n",
yesno(rgvmodectl & MEMMODE_RCLK_GATE));
seq_printf(m, "Starting frequency: P%d\n",
(rgvmodectl & MEMMODE_FSTART_MASK) >> MEMMODE_FSTART_SHIFT);
seq_printf(m, "Max P-state: P%d\n",
(rgvmodectl & MEMMODE_FMAX_MASK) >> MEMMODE_FMAX_SHIFT);
seq_printf(m, "Min P-state: P%d\n", (rgvmodectl & MEMMODE_FMIN_MASK));
seq_printf(m, "RS1 VID: %d\n", (crstandvid & 0x3f));
seq_printf(m, "RS2 VID: %d\n", ((crstandvid >> 8) & 0x3f));
seq_printf(m, "Render standby enabled: %s\n",
yesno(!(rstdbyctl & RCX_SW_EXIT)));
seq_puts(m, "Current RS state: ");
switch (rstdbyctl & RSX_STATUS_MASK) {
case RSX_STATUS_ON:
seq_puts(m, "on\n");
break;
case RSX_STATUS_RC1:
seq_puts(m, "RC1\n");
break;
case RSX_STATUS_RC1E:
seq_puts(m, "RC1E\n");
break;
case RSX_STATUS_RS1:
seq_puts(m, "RS1\n");
break;
case RSX_STATUS_RS2:
seq_puts(m, "RS2 (RC6)\n");
break;
case RSX_STATUS_RS3:
seq_puts(m, "RC3 (RC6+)\n");
break;
default:
seq_puts(m, "unknown\n");
break;
}
return 0;
}
static int drpc_show(struct seq_file *m, void *unused)
{
struct intel_gt *gt = m->private;
struct drm_i915_private *i915 = gt->i915;
intel_wakeref_t wakeref;
int err = -ENODEV;
with_intel_runtime_pm(gt->uncore->rpm, wakeref) {
if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915))
err = vlv_drpc(m);
else if (INTEL_GEN(i915) >= 6)
err = gen6_drpc(m);
else
err = ilk_drpc(m);
}
return err;
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(drpc);
static int frequency_show(struct seq_file *m, void *unused)
{
struct intel_gt *gt = m->private;
struct drm_i915_private *i915 = gt->i915;
struct intel_uncore *uncore = gt->uncore;
struct intel_rps *rps = &gt->rps;
intel_wakeref_t wakeref;
wakeref = intel_runtime_pm_get(uncore->rpm);
if (IS_GEN(i915, 5)) {
u16 rgvswctl = intel_uncore_read16(uncore, MEMSWCTL);
u16 rgvstat = intel_uncore_read16(uncore, MEMSTAT_ILK);
seq_printf(m, "Requested P-state: %d\n", (rgvswctl >> 8) & 0xf);
seq_printf(m, "Requested VID: %d\n", rgvswctl & 0x3f);
seq_printf(m, "Current VID: %d\n", (rgvstat & MEMSTAT_VID_MASK) >>
MEMSTAT_VID_SHIFT);
seq_printf(m, "Current P-state: %d\n",
(rgvstat & MEMSTAT_PSTATE_MASK) >> MEMSTAT_PSTATE_SHIFT);
} else if (IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)) {
u32 rpmodectl, freq_sts;
rpmodectl = intel_uncore_read(uncore, GEN6_RP_CONTROL);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl & GEN6_RP_MEDIA_TURBO));
seq_printf(m, "HW control enabled: %s\n",
yesno(rpmodectl & GEN6_RP_ENABLE));
seq_printf(m, "SW control enabled: %s\n",
yesno((rpmodectl & GEN6_RP_MEDIA_MODE_MASK) ==
GEN6_RP_MEDIA_SW_MODE));
vlv_punit_get(i915);
freq_sts = vlv_punit_read(i915, PUNIT_REG_GPU_FREQ_STS);
vlv_punit_put(i915);
seq_printf(m, "PUNIT_REG_GPU_FREQ_STS: 0x%08x\n", freq_sts);
seq_printf(m, "DDR freq: %d MHz\n", i915->mem_freq);
seq_printf(m, "actual GPU freq: %d MHz\n",
intel_gpu_freq(rps, (freq_sts >> 8) & 0xff));
seq_printf(m, "current GPU freq: %d MHz\n",
intel_gpu_freq(rps, rps->cur_freq));
seq_printf(m, "max GPU freq: %d MHz\n",
intel_gpu_freq(rps, rps->max_freq));
seq_printf(m, "min GPU freq: %d MHz\n",
intel_gpu_freq(rps, rps->min_freq));
seq_printf(m, "idle GPU freq: %d MHz\n",
intel_gpu_freq(rps, rps->idle_freq));
seq_printf(m, "efficient (RPe) frequency: %d MHz\n",
intel_gpu_freq(rps, rps->efficient_freq));
} else if (INTEL_GEN(i915) >= 6) {
u32 rp_state_limits;
u32 gt_perf_status;
u32 rp_state_cap;
u32 rpmodectl, rpinclimit, rpdeclimit;
u32 rpstat, cagf, reqf;
u32 rpupei, rpcurup, rpprevup;
u32 rpdownei, rpcurdown, rpprevdown;
u32 pm_ier, pm_imr, pm_isr, pm_iir, pm_mask;
int max_freq;
rp_state_limits = intel_uncore_read(uncore, GEN6_RP_STATE_LIMITS);
if (IS_GEN9_LP(i915)) {
rp_state_cap = intel_uncore_read(uncore, BXT_RP_STATE_CAP);
gt_perf_status = intel_uncore_read(uncore, BXT_GT_PERF_STATUS);
} else {
rp_state_cap = intel_uncore_read(uncore, GEN6_RP_STATE_CAP);
gt_perf_status = intel_uncore_read(uncore, GEN6_GT_PERF_STATUS);
}
/* RPSTAT1 is in the GT power well */
intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
reqf = intel_uncore_read(uncore, GEN6_RPNSWREQ);
if (INTEL_GEN(i915) >= 9) {
reqf >>= 23;
} else {
reqf &= ~GEN6_TURBO_DISABLE;
if (IS_HASWELL(i915) || IS_BROADWELL(i915))
reqf >>= 24;
else
reqf >>= 25;
}
reqf = intel_gpu_freq(rps, reqf);
rpmodectl = intel_uncore_read(uncore, GEN6_RP_CONTROL);
rpinclimit = intel_uncore_read(uncore, GEN6_RP_UP_THRESHOLD);
rpdeclimit = intel_uncore_read(uncore, GEN6_RP_DOWN_THRESHOLD);
rpstat = intel_uncore_read(uncore, GEN6_RPSTAT1);
rpupei = intel_uncore_read(uncore, GEN6_RP_CUR_UP_EI) & GEN6_CURICONT_MASK;
rpcurup = intel_uncore_read(uncore, GEN6_RP_CUR_UP) & GEN6_CURBSYTAVG_MASK;
rpprevup = intel_uncore_read(uncore, GEN6_RP_PREV_UP) & GEN6_CURBSYTAVG_MASK;
rpdownei = intel_uncore_read(uncore, GEN6_RP_CUR_DOWN_EI) & GEN6_CURIAVG_MASK;
rpcurdown = intel_uncore_read(uncore, GEN6_RP_CUR_DOWN) & GEN6_CURBSYTAVG_MASK;
rpprevdown = intel_uncore_read(uncore, GEN6_RP_PREV_DOWN) & GEN6_CURBSYTAVG_MASK;
cagf = intel_rps_read_actual_frequency(rps);
intel_uncore_forcewake_put(uncore, FORCEWAKE_ALL);
if (INTEL_GEN(i915) >= 11) {
pm_ier = intel_uncore_read(uncore, GEN11_GPM_WGBOXPERF_INTR_ENABLE);
pm_imr = intel_uncore_read(uncore, GEN11_GPM_WGBOXPERF_INTR_MASK);
/*
* The equivalent to the PM ISR & IIR cannot be read
* without affecting the current state of the system
*/
pm_isr = 0;
pm_iir = 0;
} else if (INTEL_GEN(i915) >= 8) {
pm_ier = intel_uncore_read(uncore, GEN8_GT_IER(2));
pm_imr = intel_uncore_read(uncore, GEN8_GT_IMR(2));
pm_isr = intel_uncore_read(uncore, GEN8_GT_ISR(2));
pm_iir = intel_uncore_read(uncore, GEN8_GT_IIR(2));
} else {
pm_ier = intel_uncore_read(uncore, GEN6_PMIER);
pm_imr = intel_uncore_read(uncore, GEN6_PMIMR);
pm_isr = intel_uncore_read(uncore, GEN6_PMISR);
pm_iir = intel_uncore_read(uncore, GEN6_PMIIR);
}
pm_mask = intel_uncore_read(uncore, GEN6_PMINTRMSK);
seq_printf(m, "Video Turbo Mode: %s\n",
yesno(rpmodectl & GEN6_RP_MEDIA_TURBO));
seq_printf(m, "HW control enabled: %s\n",
yesno(rpmodectl & GEN6_RP_ENABLE));
seq_printf(m, "SW control enabled: %s\n",
yesno((rpmodectl & GEN6_RP_MEDIA_MODE_MASK) ==
GEN6_RP_MEDIA_SW_MODE));
seq_printf(m, "PM IER=0x%08x IMR=0x%08x, MASK=0x%08x\n",
pm_ier, pm_imr, pm_mask);
if (INTEL_GEN(i915) <= 10)
seq_printf(m, "PM ISR=0x%08x IIR=0x%08x\n",
pm_isr, pm_iir);
seq_printf(m, "pm_intrmsk_mbz: 0x%08x\n",
rps->pm_intrmsk_mbz);
seq_printf(m, "GT_PERF_STATUS: 0x%08x\n", gt_perf_status);
seq_printf(m, "Render p-state ratio: %d\n",
(gt_perf_status & (INTEL_GEN(i915) >= 9 ? 0x1ff00 : 0xff00)) >> 8);
seq_printf(m, "Render p-state VID: %d\n",
gt_perf_status & 0xff);
seq_printf(m, "Render p-state limit: %d\n",
rp_state_limits & 0xff);
seq_printf(m, "RPSTAT1: 0x%08x\n", rpstat);
seq_printf(m, "RPMODECTL: 0x%08x\n", rpmodectl);
seq_printf(m, "RPINCLIMIT: 0x%08x\n", rpinclimit);
seq_printf(m, "RPDECLIMIT: 0x%08x\n", rpdeclimit);
seq_printf(m, "RPNSWREQ: %dMHz\n", reqf);
seq_printf(m, "CAGF: %dMHz\n", cagf);
seq_printf(m, "RP CUR UP EI: %d (%dus)\n",
rpupei, GT_PM_INTERVAL_TO_US(i915, rpupei));
seq_printf(m, "RP CUR UP: %d (%dus)\n",
rpcurup, GT_PM_INTERVAL_TO_US(i915, rpcurup));
seq_printf(m, "RP PREV UP: %d (%dus)\n",
rpprevup, GT_PM_INTERVAL_TO_US(i915, rpprevup));
seq_printf(m, "Up threshold: %d%%\n",
rps->power.up_threshold);
seq_printf(m, "RP CUR DOWN EI: %d (%dus)\n",
rpdownei, GT_PM_INTERVAL_TO_US(i915, rpdownei));
seq_printf(m, "RP CUR DOWN: %d (%dus)\n",
rpcurdown, GT_PM_INTERVAL_TO_US(i915, rpcurdown));
seq_printf(m, "RP PREV DOWN: %d (%dus)\n",
rpprevdown, GT_PM_INTERVAL_TO_US(i915, rpprevdown));
seq_printf(m, "Down threshold: %d%%\n",
rps->power.down_threshold);
max_freq = (IS_GEN9_LP(i915) ? rp_state_cap >> 0 :
rp_state_cap >> 16) & 0xff;
max_freq *= (IS_GEN9_BC(i915) ||
INTEL_GEN(i915) >= 10 ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Lowest (RPN) frequency: %dMHz\n",
intel_gpu_freq(rps, max_freq));
max_freq = (rp_state_cap & 0xff00) >> 8;
max_freq *= (IS_GEN9_BC(i915) ||
INTEL_GEN(i915) >= 10 ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Nominal (RP1) frequency: %dMHz\n",
intel_gpu_freq(rps, max_freq));
max_freq = (IS_GEN9_LP(i915) ? rp_state_cap >> 16 :
rp_state_cap >> 0) & 0xff;
max_freq *= (IS_GEN9_BC(i915) ||
INTEL_GEN(i915) >= 10 ? GEN9_FREQ_SCALER : 1);
seq_printf(m, "Max non-overclocked (RP0) frequency: %dMHz\n",
intel_gpu_freq(rps, max_freq));
seq_printf(m, "Max overclocked frequency: %dMHz\n",
intel_gpu_freq(rps, rps->max_freq));
seq_printf(m, "Current freq: %d MHz\n",
intel_gpu_freq(rps, rps->cur_freq));
seq_printf(m, "Actual freq: %d MHz\n", cagf);
seq_printf(m, "Idle freq: %d MHz\n",
intel_gpu_freq(rps, rps->idle_freq));
seq_printf(m, "Min freq: %d MHz\n",
intel_gpu_freq(rps, rps->min_freq));
seq_printf(m, "Boost freq: %d MHz\n",
intel_gpu_freq(rps, rps->boost_freq));
seq_printf(m, "Max freq: %d MHz\n",
intel_gpu_freq(rps, rps->max_freq));
seq_printf(m,
"efficient (RPe) frequency: %d MHz\n",
intel_gpu_freq(rps, rps->efficient_freq));
} else {
seq_puts(m, "no P-state info available\n");
}
seq_printf(m, "Current CD clock frequency: %d kHz\n", i915->cdclk.hw.cdclk);
seq_printf(m, "Max CD clock frequency: %d kHz\n", i915->max_cdclk_freq);
seq_printf(m, "Max pixel clock frequency: %d kHz\n", i915->max_dotclk_freq);
intel_runtime_pm_put(uncore->rpm, wakeref);
return 0;
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(frequency);
static int llc_show(struct seq_file *m, void *data)
{
struct intel_gt *gt = m->private;
struct drm_i915_private *i915 = gt->i915;
const bool edram = INTEL_GEN(i915) > 8;
struct intel_rps *rps = &gt->rps;
unsigned int max_gpu_freq, min_gpu_freq;
intel_wakeref_t wakeref;
int gpu_freq, ia_freq;
seq_printf(m, "LLC: %s\n", yesno(HAS_LLC(i915)));
seq_printf(m, "%s: %uMB\n", edram ? "eDRAM" : "eLLC",
i915->edram_size_mb);
min_gpu_freq = rps->min_freq;
max_gpu_freq = rps->max_freq;
if (IS_GEN9_BC(i915) || INTEL_GEN(i915) >= 10) {
/* Convert GT frequency to 50 HZ units */
min_gpu_freq /= GEN9_FREQ_SCALER;
max_gpu_freq /= GEN9_FREQ_SCALER;
}
seq_puts(m, "GPU freq (MHz)\tEffective CPU freq (MHz)\tEffective Ring freq (MHz)\n");
wakeref = intel_runtime_pm_get(gt->uncore->rpm);
for (gpu_freq = min_gpu_freq; gpu_freq <= max_gpu_freq; gpu_freq++) {
ia_freq = gpu_freq;
sandybridge_pcode_read(i915,
GEN6_PCODE_READ_MIN_FREQ_TABLE,
&ia_freq, NULL);
seq_printf(m, "%d\t\t%d\t\t\t\t%d\n",
intel_gpu_freq(rps,
(gpu_freq *
(IS_GEN9_BC(i915) ||
INTEL_GEN(i915) >= 10 ?
GEN9_FREQ_SCALER : 1))),
((ia_freq >> 0) & 0xff) * 100,
((ia_freq >> 8) & 0xff) * 100);
}
intel_runtime_pm_put(gt->uncore->rpm, wakeref);
return 0;
}
static bool llc_eval(const struct intel_gt *gt)
{
return HAS_LLC(gt->i915);
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(llc);
static const char *rps_power_to_str(unsigned int power)
{
static const char * const strings[] = {
[LOW_POWER] = "low power",
[BETWEEN] = "mixed",
[HIGH_POWER] = "high power",
};
if (power >= ARRAY_SIZE(strings) || !strings[power])
return "unknown";
return strings[power];
}
static int rps_boost_show(struct seq_file *m, void *data)
{
struct intel_gt *gt = m->private;
struct drm_i915_private *i915 = gt->i915;
struct intel_rps *rps = &gt->rps;
seq_printf(m, "RPS enabled? %d\n", rps->enabled);
seq_printf(m, "GPU busy? %s\n", yesno(gt->awake));
seq_printf(m, "Boosts outstanding? %d\n",
atomic_read(&rps->num_waiters));
seq_printf(m, "Interactive? %d\n", READ_ONCE(rps->power.interactive));
seq_printf(m, "Frequency requested %d, actual %d\n",
intel_gpu_freq(rps, rps->cur_freq),
intel_rps_read_actual_frequency(rps));
seq_printf(m, " min hard:%d, soft:%d; max soft:%d, hard:%d\n",
intel_gpu_freq(rps, rps->min_freq),
intel_gpu_freq(rps, rps->min_freq_softlimit),
intel_gpu_freq(rps, rps->max_freq_softlimit),
intel_gpu_freq(rps, rps->max_freq));
seq_printf(m, " idle:%d, efficient:%d, boost:%d\n",
intel_gpu_freq(rps, rps->idle_freq),
intel_gpu_freq(rps, rps->efficient_freq),
intel_gpu_freq(rps, rps->boost_freq));
seq_printf(m, "Wait boosts: %d\n", atomic_read(&rps->boosts));
if (INTEL_GEN(i915) >= 6 && rps->enabled && gt->awake) {
struct intel_uncore *uncore = gt->uncore;
u32 rpup, rpupei;
u32 rpdown, rpdownei;
intel_uncore_forcewake_get(uncore, FORCEWAKE_ALL);
rpup = intel_uncore_read_fw(uncore, GEN6_RP_CUR_UP) & GEN6_RP_EI_MASK;
rpupei = intel_uncore_read_fw(uncore, GEN6_RP_CUR_UP_EI) & GEN6_RP_EI_MASK;
rpdown = intel_uncore_read_fw(uncore, GEN6_RP_CUR_DOWN) & GEN6_RP_EI_MASK;
rpdownei = intel_uncore_read_fw(uncore, GEN6_RP_CUR_DOWN_EI) & GEN6_RP_EI_MASK;
intel_uncore_forcewake_put(uncore, FORCEWAKE_ALL);
seq_printf(m, "\nRPS Autotuning (current \"%s\" window):\n",
rps_power_to_str(rps->power.mode));
seq_printf(m, " Avg. up: %d%% [above threshold? %d%%]\n",
rpup && rpupei ? 100 * rpup / rpupei : 0,
rps->power.up_threshold);
seq_printf(m, " Avg. down: %d%% [below threshold? %d%%]\n",
rpdown && rpdownei ? 100 * rpdown / rpdownei : 0,
rps->power.down_threshold);
} else {
seq_puts(m, "\nRPS Autotuning inactive\n");
}
return 0;
}
static bool rps_eval(const struct intel_gt *gt)
{
return HAS_RPS(gt->i915);
}
DEFINE_GT_DEBUGFS_ATTRIBUTE(rps_boost);
void debugfs_gt_pm_register(struct intel_gt *gt, struct dentry *root)
{
static const struct debugfs_gt_file files[] = {
{ "drpc", &drpc_fops, NULL },
{ "frequency", &frequency_fops, NULL },
{ "forcewake", &fw_domains_fops, NULL },
{ "llc", &llc_fops, llc_eval },
{ "rps_boost", &rps_boost_fops, rps_eval },
};
debugfs_gt_register_files(gt, root, files, ARRAY_SIZE(files));
}


@@ -0,0 +1,14 @@
/* SPDX-License-Identifier: MIT */
/*
* Copyright © 2019 Intel Corporation
*/
#ifndef DEBUGFS_GT_PM_H
#define DEBUGFS_GT_PM_H
struct intel_gt;
struct dentry;
void debugfs_gt_pm_register(struct intel_gt *gt, struct dentry *root);
#endif /* DEBUGFS_GT_PM_H */


@@ -28,6 +28,8 @@
#include "i915_drv.h"
#include "i915_trace.h"
#include "intel_gt_pm.h"
#include "intel_gt_requests.h"
static void irq_enable(struct intel_engine_cs *engine)
{
@@ -53,15 +55,17 @@ static void irq_disable(struct intel_engine_cs *engine)
static void __intel_breadcrumbs_disarm_irq(struct intel_breadcrumbs *b)
{
struct intel_engine_cs *engine =
container_of(b, struct intel_engine_cs, breadcrumbs);
lockdep_assert_held(&b->irq_lock);
GEM_BUG_ON(!b->irq_enabled);
if (!--b->irq_enabled)
irq_disable(container_of(b,
struct intel_engine_cs,
breadcrumbs));
irq_disable(engine);
b->irq_armed = false;
intel_gt_pm_put_async(engine->gt);
}
void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine)
@@ -127,16 +131,23 @@ __dma_fence_signal__notify(struct dma_fence *fence,
}
}
void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
static void add_retire(struct intel_breadcrumbs *b, struct intel_timeline *tl)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;
struct intel_engine_cs *engine =
container_of(b, struct intel_engine_cs, breadcrumbs);
intel_engine_add_retire(engine, tl);
}
static void signal_irq_work(struct irq_work *work)
{
struct intel_breadcrumbs *b = container_of(work, typeof(*b), irq_work);
const ktime_t timestamp = ktime_get();
struct intel_context *ce, *cn;
struct list_head *pos, *next;
unsigned long flags;
LIST_HEAD(signal);
spin_lock_irqsave(&b->irq_lock, flags);
spin_lock(&b->irq_lock);
if (b->irq_armed && list_empty(&b->signalers))
__intel_breadcrumbs_disarm_irq(b);
@@ -177,44 +188,41 @@ void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine)
if (!list_is_first(pos, &ce->signals)) {
/* Advance the list to the first incomplete request */
__list_del_many(&ce->signals, pos);
if (&ce->signals == pos) /* now empty */
if (&ce->signals == pos) { /* now empty */
list_del_init(&ce->signal_link);
add_retire(b, ce->timeline);
}
}
}
spin_unlock_irqrestore(&b->irq_lock, flags);
spin_unlock(&b->irq_lock);
list_for_each_safe(pos, next, &signal) {
struct i915_request *rq =
list_entry(pos, typeof(*rq), signal_link);
struct list_head cb_list;
spin_lock_irqsave(&rq->lock, flags);
spin_lock(&rq->lock);
list_replace(&rq->fence.cb_list, &cb_list);
__dma_fence_signal__timestamp(&rq->fence, timestamp);
__dma_fence_signal__notify(&rq->fence, &cb_list);
spin_unlock_irqrestore(&rq->lock, flags);
spin_unlock(&rq->lock);
i915_request_put(rq);
}
}
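signal_irq_work() above is an instance of a standard shape: detach completed items onto a local list under the spinlock, then run the callbacks with the lock dropped. Stripped of the breadcrumb specifics (the my_* types are hypothetical):

#include <linux/list.h>
#include <linux/spinlock.h>

struct my_item {
	struct list_head link;
	void (*notify)(struct my_item *it);
};

struct my_signaler {
	spinlock_t lock;
	struct list_head pending;
};

static void drain_and_notify(struct my_signaler *s)
{
	struct my_item *it, *next;
	LIST_HEAD(local);

	spin_lock(&s->lock);
	list_splice_init(&s->pending, &local);
	spin_unlock(&s->lock);

	/* callbacks run without s->lock held, so they may take other locks */
	list_for_each_entry_safe(it, next, &local, link) {
		list_del(&it->link);
		it->notify(it);
	}
}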
static void signal_irq_work(struct irq_work *work)
{
struct intel_engine_cs *engine =
container_of(work, typeof(*engine), breadcrumbs.irq_work);
intel_engine_breadcrumbs_irq(engine);
}
static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
static bool __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
{
struct intel_engine_cs *engine =
container_of(b, struct intel_engine_cs, breadcrumbs);
lockdep_assert_held(&b->irq_lock);
if (b->irq_armed)
return;
return true;
if (!intel_gt_pm_get_if_awake(engine->gt))
return false;
/*
* The breadcrumb irq will be disarmed on the interrupt after the
@@ -234,6 +242,8 @@ static void __intel_breadcrumbs_arm_irq(struct intel_breadcrumbs *b)
if (!b->irq_enabled++)
irq_enable(engine);
return true;
}
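The bool return above exists because arming can now fail: the irq is only armed while the GT holds a power reference, and intel_gt_pm_get_if_awake() refuses to wake an idle device from this atomic context. The shape of the pattern, condensed from the hunks above:

/*
 * Take the reference only if the device is already awake, and report
 * failure so the caller can skip arming (an idle GT has nothing left
 * to signal anyway).
 */
static bool arm_if_awake(struct intel_breadcrumbs *b)
{
	struct intel_engine_cs *engine =
		container_of(b, struct intel_engine_cs, breadcrumbs);

	if (b->irq_armed)
		return true;

	if (!intel_gt_pm_get_if_awake(engine->gt))
		return false;

	if (!b->irq_enabled++)
		irq_enable(engine);
	b->irq_armed = true;

	return true;
}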
void intel_engine_init_breadcrumbs(struct intel_engine_cs *engine)
@@ -271,19 +281,20 @@ bool i915_request_enable_breadcrumb(struct i915_request *rq)
if (test_bit(I915_FENCE_FLAG_ACTIVE, &rq->fence.flags)) {
struct intel_breadcrumbs *b = &rq->engine->breadcrumbs;
struct intel_context *ce = rq->hw_context;
struct intel_context *ce = rq->context;
struct list_head *pos;
spin_lock(&b->irq_lock);
GEM_BUG_ON(test_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags));
__intel_breadcrumbs_arm_irq(b);
if (!__intel_breadcrumbs_arm_irq(b))
goto unlock;
/*
* We keep the seqno in retirement order, so we can break
* inside intel_engine_breadcrumbs_irq as soon as we've passed
* the last completed request (or seen a request that hasn't
* event started). We could iterate the timeline->requests list,
* inside intel_engine_signal_breadcrumbs as soon as we've
* passed the last completed request (or seen a request that
* hasn't event started). We could walk the timeline->requests,
* but keeping a separate signalers_list has the advantage of
* hopefully being much smaller than the full list and so
* provides faster iteration and detection when there are no
@@ -306,6 +317,7 @@ bool i915_request_enable_breadcrumb(struct i915_request *rq)
GEM_BUG_ON(!check_signal_order(ce, rq));
set_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags);
unlock:
spin_unlock(&b->irq_lock);
}
@@ -326,7 +338,7 @@ void i915_request_cancel_breadcrumb(struct i915_request *rq)
*/
spin_lock(&b->irq_lock);
if (test_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags)) {
struct intel_context *ce = rq->hw_context;
struct intel_context *ce = rq->context;
list_del(&rq->signal_link);
if (list_empty(&ce->signals))


@@ -31,8 +31,7 @@ void intel_context_free(struct intel_context *ce)
}
struct intel_context *
intel_context_create(struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
intel_context_create(struct intel_engine_cs *engine)
{
struct intel_context *ce;
@@ -40,7 +39,7 @@ intel_context_create(struct i915_gem_context *ctx,
if (!ce)
return ERR_PTR(-ENOMEM);
intel_context_init(ce, ctx, engine);
intel_context_init(ce, engine);
return ce;
}
@@ -68,11 +67,8 @@ int __intel_context_do_pin(struct intel_context *ce)
if (err)
goto err;
GEM_TRACE("%s context:%llx pin ring:{head:%04x, tail:%04x}\n",
ce->engine->name, ce->timeline->fence_context,
ce->ring->head, ce->ring->tail);
i915_gem_context_get(ce->gem_context); /* for ctx->ppgtt */
CE_TRACE(ce, "pin ring:{head:%04x, tail:%04x}\n",
ce->ring->head, ce->ring->tail);
smp_mb__before_atomic(); /* flush pin before it is visible */
}
@@ -98,12 +94,10 @@ void intel_context_unpin(struct intel_context *ce)
mutex_lock_nested(&ce->pin_mutex, SINGLE_DEPTH_NESTING);
if (likely(atomic_dec_and_test(&ce->pin_count))) {
GEM_TRACE("%s context:%llx retire\n",
ce->engine->name, ce->timeline->fence_context);
CE_TRACE(ce, "retire\n");
ce->ops->unpin(ce);
i915_gem_context_put(ce->gem_context);
intel_context_active_release(ce);
}
@@ -113,13 +107,10 @@ void intel_context_unpin(struct intel_context *ce)
static int __context_pin_state(struct i915_vma *vma)
{
u64 flags;
unsigned int bias = i915_ggtt_pin_bias(vma) | PIN_OFFSET_BIAS;
int err;
flags = i915_ggtt_pin_bias(vma) | PIN_OFFSET_BIAS;
flags |= PIN_HIGH | PIN_GLOBAL;
err = i915_vma_pin(vma, 0, 0, flags);
err = i915_ggtt_pin(vma, 0, bias | PIN_HIGH);
if (err)
return err;
@@ -144,9 +135,9 @@ static void __intel_context_retire(struct i915_active *active)
{
struct intel_context *ce = container_of(active, typeof(*ce), active);
GEM_TRACE("%s context:%llx retire\n",
ce->engine->name, ce->timeline->fence_context);
CE_TRACE(ce, "retire\n");
set_bit(CONTEXT_VALID_BIT, &ce->flags);
if (ce->state)
__context_unpin_state(ce->state);
@@ -198,7 +189,7 @@ int intel_context_active_acquire(struct intel_context *ce)
return err;
/* Preallocate tracking nodes */
if (!i915_gem_context_is_kernel(ce->gem_context)) {
if (!intel_context_is_barrier(ce)) {
err = i915_active_acquire_preallocate_barrier(&ce->active,
ce->engine);
if (err) {
@@ -219,30 +210,19 @@ void intel_context_active_release(struct intel_context *ce)
void
intel_context_init(struct intel_context *ce,
struct i915_gem_context *ctx,
struct intel_engine_cs *engine)
{
struct i915_address_space *vm;
GEM_BUG_ON(!engine->cops);
GEM_BUG_ON(!engine->gt->vm);
kref_init(&ce->ref);
ce->gem_context = ctx;
rcu_read_lock();
vm = rcu_dereference(ctx->vm);
if (vm)
ce->vm = i915_vm_get(vm);
else
ce->vm = i915_vm_get(&engine->gt->ggtt->vm);
rcu_read_unlock();
if (ctx->timeline)
ce->timeline = intel_timeline_get(ctx->timeline);
ce->engine = engine;
ce->ops = engine->cops;
ce->sseu = engine->sseu;
ce->ring = __intel_context_ring_size(SZ_16K);
ce->ring = __intel_context_ring_size(SZ_4K);
ce->vm = i915_vm_get(engine->gt->vm);
INIT_LIST_HEAD(&ce->signal_link);
INIT_LIST_HEAD(&ce->signals);
@@ -307,30 +287,11 @@ int intel_context_prepare_remote_request(struct intel_context *ce,
int err;
/* Only suitable for use in remotely modifying this context */
GEM_BUG_ON(rq->hw_context == ce);
GEM_BUG_ON(rq->context == ce);
if (rcu_access_pointer(rq->timeline) != tl) { /* timeline sharing! */
/*
* Ideally, we just want to insert our foreign fence as
* a barrier into the remote context, such that this operation
* occurs after all current operations in that context, and
* all future operations must occur after this.
*
* Currently, the timeline->last_request tracking is guarded
* by its mutex and so we must obtain that to atomically
* insert our barrier. However, since we already hold our
* timeline->mutex, we must be careful against potential
* inversion if we are the kernel_context as the remote context
* will itself poke at the kernel_context when it needs to
* unpin. Ergo, if already locked, we drop both locks and
* try again (through the magic of userspace repeating EAGAIN).
*/
if (!mutex_trylock(&tl->mutex))
return -EAGAIN;
/* Queue this switch after current activity by this context. */
err = i915_active_fence_set(&tl->last_request, rq);
mutex_unlock(&tl->mutex);
if (err)
return err;
}
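The deleted comment above documents a classic lock-inversion defence: with our own timeline->mutex already held, the remote timeline's mutex may only be taken with a trylock, and contention is bounced back to the caller as -EAGAIN rather than risking deadlock. A standalone userspace sketch of the same shape (pthreads standing in for the kernel mutex API; all names here are illustrative, not from the patch):

#include <errno.h>
#include <pthread.h>

/* Two timelines whose locks have no global ordering. */
struct timeline {
	pthread_mutex_t mutex;
	/* ... request tracking ... */
};

/*
 * Insert a barrier into @remote while the caller already holds
 * @local->mutex. Blocking on remote->mutex could deadlock against a
 * thread holding remote->mutex and waiting for local->mutex, so only
 * try it and let the caller back off and retry.
 */
static int insert_barrier(struct timeline *local, struct timeline *remote)
{
	(void)local;	/* held by the caller; listed for symmetry */

	if (pthread_mutex_trylock(&remote->mutex))
		return -EAGAIN;	/* caller drops local->mutex and retries */

	/* ... queue the barrier after remote's current activity ... */

	pthread_mutex_unlock(&remote->mutex);
	return 0;
}

int main(void)
{
	struct timeline a = { PTHREAD_MUTEX_INITIALIZER };
	struct timeline b = { PTHREAD_MUTEX_INITIALIZER };

	pthread_mutex_lock(&a.mutex);	/* we are "inside" timeline a */
	return insert_barrier(&a, &b) ? 1 : 0;
}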

diff --git a/drivers/gpu/drm/i915/gt/intel_context.h b/drivers/gpu/drm/i915/gt/intel_context.h

@@ -7,7 +7,9 @@
#ifndef __INTEL_CONTEXT_H__
#define __INTEL_CONTEXT_H__
#include <linux/bitops.h>
#include <linux/lockdep.h>
#include <linux/types.h>
#include "i915_active.h"
#include "intel_context_types.h"
@@ -15,14 +17,19 @@
#include "intel_ring_types.h"
#include "intel_timeline_types.h"
#define CE_TRACE(ce, fmt, ...) do { \
const struct intel_context *ce__ = (ce); \
ENGINE_TRACE(ce__->engine, "context:%llx" fmt, \
ce__->timeline->fence_context, \
##__VA_ARGS__); \
} while (0)
void intel_context_init(struct intel_context *ce,
struct i915_gem_context *ctx,
struct intel_engine_cs *engine);
void intel_context_fini(struct intel_context *ce);
struct intel_context *
intel_context_create(struct i915_gem_context *ctx,
struct intel_engine_cs *engine);
intel_context_create(struct intel_engine_cs *engine);
void intel_context_free(struct intel_context *ce);
@@ -153,4 +160,64 @@ static inline struct intel_ring *__intel_context_ring_size(u64 sz)
return u64_to_ptr(struct intel_ring, sz);
}
static inline bool intel_context_is_barrier(const struct intel_context *ce)
{
return test_bit(CONTEXT_BARRIER_BIT, &ce->flags);
}
static inline bool intel_context_use_semaphores(const struct intel_context *ce)
{
return test_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
}
static inline void intel_context_set_use_semaphores(struct intel_context *ce)
{
set_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
}
static inline void intel_context_clear_use_semaphores(struct intel_context *ce)
{
clear_bit(CONTEXT_USE_SEMAPHORES, &ce->flags);
}
static inline bool intel_context_is_banned(const struct intel_context *ce)
{
return test_bit(CONTEXT_BANNED, &ce->flags);
}
static inline bool intel_context_set_banned(struct intel_context *ce)
{
return test_and_set_bit(CONTEXT_BANNED, &ce->flags);
}
static inline bool
intel_context_force_single_submission(const struct intel_context *ce)
{
return test_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
}
static inline void
intel_context_set_single_submission(struct intel_context *ce)
{
__set_bit(CONTEXT_FORCE_SINGLE_SUBMISSION, &ce->flags);
}
static inline bool
intel_context_nopreempt(const struct intel_context *ce)
{
return test_bit(CONTEXT_NOPREEMPT, &ce->flags);
}
static inline void
intel_context_set_nopreempt(struct intel_context *ce)
{
set_bit(CONTEXT_NOPREEMPT, &ce->flags);
}
static inline void
intel_context_clear_nopreempt(struct intel_context *ce)
{
clear_bit(CONTEXT_NOPREEMPT, &ce->flags);
}
#endif /* __INTEL_CONTEXT_H__ */
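The new helpers above are thin named wrappers around atomic bitops on ce->flags (the bit indices follow in intel_context_types.h). Note that intel_context_set_banned() returns test_and_set_bit()'s old value, so exactly one caller observes the unbanned-to-banned transition. A rough userspace analogue, with C11 atomics standing in for the kernel's bitops (types and names invented for illustration):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define CONTEXT_BANNED 4UL

struct context {
	atomic_ulong flags;
};

static bool context_is_banned(struct context *ce)
{
	return atomic_load(&ce->flags) & (1UL << CONTEXT_BANNED);
}

/* Returns the previous state, like the kernel's test_and_set_bit(). */
static bool context_set_banned(struct context *ce)
{
	return atomic_fetch_or(&ce->flags, 1UL << CONTEXT_BANNED) &
	       (1UL << CONTEXT_BANNED);
}

int main(void)
{
	struct context ce = { .flags = 0 };

	if (!context_set_banned(&ce))	/* only the first caller sees false */
		printf("banning context\n");
	printf("banned? %d\n", context_is_banned(&ce));
	return 0;
}

This is why intel_context_set_banned() returns bool while the other setters return void: the one-time ban work can be keyed off the returned old state.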

diff --git a/drivers/gpu/drm/i915/gt/intel_context_types.h b/drivers/gpu/drm/i915/gt/intel_context_types.h

@@ -44,7 +44,7 @@ struct intel_context {
#define intel_context_inflight_count(ce) ptr_unmask_bits((ce)->inflight, 2)
struct i915_address_space *vm;
struct i915_gem_context *gem_context;
struct i915_gem_context __rcu *gem_context;
struct list_head signal_link;
struct list_head signals;
@@ -54,7 +54,13 @@ struct intel_context {
struct intel_timeline *timeline;
unsigned long flags;
#define CONTEXT_ALLOC_BIT 0
#define CONTEXT_BARRIER_BIT 0
#define CONTEXT_ALLOC_BIT 1
#define CONTEXT_VALID_BIT 2
#define CONTEXT_USE_SEMAPHORES 3
#define CONTEXT_BANNED 4
#define CONTEXT_FORCE_SINGLE_SUBMISSION 5
#define CONTEXT_NOPREEMPT 6
u32 *lrc_reg_state;
u64 lrc_desc;

diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h

@@ -29,6 +29,13 @@ struct intel_gt;
#define CACHELINE_BYTES 64
#define CACHELINE_DWORDS (CACHELINE_BYTES / sizeof(u32))
#define ENGINE_TRACE(e, fmt, ...) do { \
const struct intel_engine_cs *e__ __maybe_unused = (e); \
GEM_TRACE("%s %s: " fmt, \
dev_name(e__->i915->drm.dev), e__->name, \
##__VA_ARGS__); \
} while (0)
/*
* The register defines to be used with the following macros need to accept a
* base param, e.g.:
@@ -177,15 +184,15 @@ void intel_engine_stop(struct intel_engine_cs *engine);
void intel_engine_cleanup(struct intel_engine_cs *engine);
int intel_engines_init_mmio(struct intel_gt *gt);
int intel_engines_setup(struct intel_gt *gt);
int intel_engines_init(struct intel_gt *gt);
void intel_engines_cleanup(struct intel_gt *gt);
void intel_engines_release(struct intel_gt *gt);
void intel_engines_free(struct intel_gt *gt);
int intel_engine_init_common(struct intel_engine_cs *engine);
void intel_engine_cleanup_common(struct intel_engine_cs *engine);
int intel_ring_submission_setup(struct intel_engine_cs *engine);
int intel_ring_submission_init(struct intel_engine_cs *engine);
int intel_engine_stop_cs(struct intel_engine_cs *engine);
void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine);
@@ -206,13 +213,11 @@ void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine);
void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine);
static inline void
intel_engine_queue_breadcrumbs(struct intel_engine_cs *engine)
intel_engine_signal_breadcrumbs(struct intel_engine_cs *engine)
{
irq_work_queue(&engine->breadcrumbs.irq_work);
}
void intel_engine_breadcrumbs_irq(struct intel_engine_cs *engine);
void intel_engine_reset_breadcrumbs(struct intel_engine_cs *engine);
void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine);
@@ -270,14 +275,14 @@ gen8_emit_ggtt_write(u32 *cs, u32 value, u32 gtt_offset, u32 flags)
static inline void __intel_engine_reset(struct intel_engine_cs *engine,
bool stalled)
{
if (engine->reset.reset)
engine->reset.reset(engine, stalled);
if (engine->reset.rewind)
engine->reset.rewind(engine, stalled);
engine->serial++; /* contexts lost */
}
bool intel_engines_are_idle(struct intel_gt *gt);
bool intel_engine_is_idle(struct intel_engine_cs *engine);
void intel_engine_flush_submission(struct intel_engine_cs *engine);
bool intel_engine_flush_submission(struct intel_engine_cs *engine);
void intel_engines_reset_default_submission(struct intel_gt *gt);
@@ -296,7 +301,7 @@ ktime_t intel_engine_get_busy_time(struct intel_engine_cs *engine);
struct i915_request *
intel_engine_find_active_request(struct intel_engine_cs *engine);
u32 intel_engine_context_size(struct drm_i915_private *i915, u8 class);
u32 intel_engine_context_size(struct intel_gt *gt, u8 class);
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
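ENGINE_TRACE() above and CE_TRACE() in intel_context.h are layered variadic macros: each level evaluates its argument once into a local, prepends its identifying prefix to the format string, and forwards the remaining arguments down. A toy userspace version of the same layering (printf in place of GEM_TRACE's tracing backend; GNU ##__VA_ARGS__, as the kernel itself uses; all names invented):

#include <stdio.h>

/* Base layer: a plain formatted trace. */
#define GEM_TRACE(fmt, ...) printf(fmt, ##__VA_ARGS__)

struct engine { const char *name; };
struct context { const struct engine *engine; unsigned long long fence; };

/* Each layer prefixes its context and forwards the varargs. */
#define ENGINE_TRACE(e, fmt, ...) do {				\
	const struct engine *e__ = (e);				\
	GEM_TRACE("%s: " fmt, e__->name, ##__VA_ARGS__);	\
} while (0)

#define CE_TRACE(ce, fmt, ...) do {				\
	const struct context *ce__ = (ce);			\
	ENGINE_TRACE(ce__->engine, "context:%llx " fmt,		\
		     ce__->fence, ##__VA_ARGS__);		\
} while (0)

int main(void)
{
	struct engine rcs = { "rcs0" };
	struct context ce = { &rcs, 0x1234 };

	/* Prints: rcs0: context:1234 pin ring:{head:0000, tail:0040} */
	CE_TRACE(&ce, "pin ring:{head:%04x, tail:%04x}\n", 0u, 0x40u);
	return 0;
}

The do/while(0) wrapper and the single evaluation into e__/ce__ keep the macros safe to use as statements and free of double-evaluation surprises, which is exactly the pattern the patch adopts.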

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c

@@ -141,7 +141,7 @@ static const struct engine_info intel_engines[] = {
/**
* intel_engine_context_size() - return the size of the context for an engine
* @dev_priv: i915 device private
* @gt: the gt
* @class: engine class
*
* Each engine class may require a different amount of space for a context
@@ -153,17 +153,18 @@ static const struct engine_info intel_engines[] = {
* in LRC mode, but does not include the "shared data page" used with
* GuC submission. The caller should account for this if using the GuC.
*/
u32 intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
u32 intel_engine_context_size(struct intel_gt *gt, u8 class)
{
struct intel_uncore *uncore = gt->uncore;
u32 cxt_size;
BUILD_BUG_ON(I915_GTT_PAGE_SIZE != PAGE_SIZE);
switch (class) {
case RENDER_CLASS:
switch (INTEL_GEN(dev_priv)) {
switch (INTEL_GEN(gt->i915)) {
default:
MISSING_CASE(INTEL_GEN(dev_priv));
MISSING_CASE(INTEL_GEN(gt->i915));
return DEFAULT_LR_CONTEXT_RENDER_SIZE;
case 12:
case 11:
@@ -175,14 +176,14 @@ u32 intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
case 8:
return GEN8_LR_CONTEXT_RENDER_SIZE;
case 7:
if (IS_HASWELL(dev_priv))
if (IS_HASWELL(gt->i915))
return HSW_CXT_TOTAL_SIZE;
cxt_size = I915_READ(GEN7_CXT_SIZE);
cxt_size = intel_uncore_read(uncore, GEN7_CXT_SIZE);
return round_up(GEN7_CXT_TOTAL_SIZE(cxt_size) * 64,
PAGE_SIZE);
case 6:
cxt_size = I915_READ(CXT_SIZE);
cxt_size = intel_uncore_read(uncore, CXT_SIZE);
return round_up(GEN6_CXT_TOTAL_SIZE(cxt_size) * 64,
PAGE_SIZE);
case 5:
@@ -197,9 +198,9 @@ u32 intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
* minimum allocation anyway so it should all come
* out in the wash.
*/
cxt_size = I915_READ(CXT_SIZE) + 1;
cxt_size = intel_uncore_read(uncore, CXT_SIZE) + 1;
DRM_DEBUG_DRIVER("gen%d CXT_SIZE = %d bytes [0x%08x]\n",
INTEL_GEN(dev_priv),
INTEL_GEN(gt->i915),
cxt_size * 64,
cxt_size - 1);
return round_up(cxt_size * 64, PAGE_SIZE);
@@ -216,7 +217,7 @@ u32 intel_engine_context_size(struct drm_i915_private *dev_priv, u8 class)
case VIDEO_DECODE_CLASS:
case VIDEO_ENHANCEMENT_CLASS:
case COPY_ENGINE_CLASS:
if (INTEL_GEN(dev_priv) < 8)
if (INTEL_GEN(gt->i915) < 8)
return 0;
return GEN8_LR_CONTEXT_OTHER_SIZE;
}
@@ -318,14 +319,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
engine->props.timeslice_duration_ms =
CONFIG_DRM_I915_TIMESLICE_DURATION;
/*
* To be overridden by the backend on setup. However, to facilitate
* cleanup on error during setup, we always provide the destroy vfunc.
*/
engine->destroy = (typeof(engine->destroy))kfree;
engine->context_size = intel_engine_context_size(gt->i915,
engine->class);
engine->context_size = intel_engine_context_size(gt, engine->class);
if (WARN_ON(engine->context_size > BIT(20)))
engine->context_size = 0;
if (engine->context_size)
@@ -334,6 +328,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
/* Nothing to do here, execute in order of dependencies */
engine->schedule = NULL;
ewma__engine_latency_init(&engine->latency);
seqlock_init(&engine->stats.lock);
ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier);
@@ -344,7 +339,6 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
gt->engine_class[info->class][info->instance] = engine;
gt->engine[id] = engine;
intel_engine_add_user(engine);
gt->i915->engine[id] = engine;
return 0;
@@ -390,18 +384,36 @@ static void intel_setup_engine_capabilities(struct intel_gt *gt)
}
/**
* intel_engines_cleanup() - free the resources allocated for Command Streamers
* intel_engines_release() - free the resources allocated for Command Streamers
* @gt: pointer to struct intel_gt
*/
void intel_engines_cleanup(struct intel_gt *gt)
void intel_engines_release(struct intel_gt *gt)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
/* Decouple the backend; but keep the layout for late GPU resets */
for_each_engine(engine, gt, id) {
if (!engine->release)
continue;
engine->release(engine);
engine->release = NULL;
memset(&engine->reset, 0, sizeof(engine->reset));
gt->i915->engine[id] = NULL;
}
}
void intel_engines_free(struct intel_gt *gt)
{
struct intel_engine_cs *engine;
enum intel_engine_id id;
for_each_engine(engine, gt, id) {
engine->destroy(engine);
kfree(engine);
gt->engine[id] = NULL;
gt->i915->engine[id] = NULL;
}
}
@@ -455,38 +467,7 @@ int intel_engines_init_mmio(struct intel_gt *gt)
return 0;
cleanup:
intel_engines_cleanup(gt);
return err;
}
/**
* intel_engines_init() - init the Engine Command Streamers
* @gt: pointer to struct intel_gt
*
* Return: non-zero if the initialization failed.
*/
int intel_engines_init(struct intel_gt *gt)
{
int (*init)(struct intel_engine_cs *engine);
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
if (HAS_EXECLISTS(gt->i915))
init = intel_execlists_submission_init;
else
init = intel_ring_submission_init;
for_each_engine(engine, gt, id) {
err = init(engine);
if (err)
goto cleanup;
}
return 0;
cleanup:
intel_engines_cleanup(gt);
intel_engines_free(gt);
return err;
}
@@ -601,7 +582,7 @@ err:
return ret;
}
static int intel_engine_setup_common(struct intel_engine_cs *engine)
static int engine_setup_common(struct intel_engine_cs *engine)
{
int err;
@@ -631,49 +612,6 @@ static int intel_engine_setup_common(struct intel_engine_cs *engine)
return 0;
}
/**
* intel_engines_setup - setup engine state not requiring hw access
* @gt: pointer to struct intel_gt
*
* Initializes engine structure members shared between legacy and execlists
* submission modes which do not require hardware access.
*
* Typically done early in the submission mode specific engine setup stage.
*/
int intel_engines_setup(struct intel_gt *gt)
{
int (*setup)(struct intel_engine_cs *engine);
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
if (HAS_EXECLISTS(gt->i915))
setup = intel_execlists_submission_setup;
else
setup = intel_ring_submission_setup;
for_each_engine(engine, gt, id) {
err = intel_engine_setup_common(engine);
if (err)
goto cleanup;
err = setup(engine);
if (err)
goto cleanup;
/* We expect the backend to take control over its state */
GEM_BUG_ON(engine->destroy == (typeof(engine->destroy))kfree);
GEM_BUG_ON(!engine->cops);
}
return 0;
cleanup:
intel_engines_cleanup(gt);
return err;
}
struct measure_breadcrumb {
struct i915_request rq;
struct intel_timeline timeline;
@@ -757,13 +695,13 @@ create_kernel_context(struct intel_engine_cs *engine)
struct intel_context *ce;
int err;
ce = intel_context_create(engine->i915->kernel_context, engine);
ce = intel_context_create(engine);
if (IS_ERR(ce))
return ce;
ce->ring = __intel_context_ring_size(SZ_4K);
__set_bit(CONTEXT_BARRIER_BIT, &ce->flags);
err = intel_context_pin(ce);
err = intel_context_pin(ce); /* perma-pin so it is always available */
if (err) {
intel_context_put(ce);
return ERR_PTR(err);
@@ -791,13 +729,19 @@ create_kernel_context(struct intel_engine_cs *engine)
*
* Returns zero on success or an error code on failure.
*/
int intel_engine_init_common(struct intel_engine_cs *engine)
static int engine_init_common(struct intel_engine_cs *engine)
{
struct intel_context *ce;
int ret;
engine->set_default_submission(engine);
ret = measure_breadcrumb_dw(engine);
if (ret < 0)
return ret;
engine->emit_fini_breadcrumb_dw = ret;
/*
* We may need to do things with the shrinker which
* require us to immediately switch back to the default
@ -812,18 +756,38 @@ int intel_engine_init_common(struct intel_engine_cs *engine)
engine->kernel_context = ce;
ret = measure_breadcrumb_dw(engine);
if (ret < 0)
goto err_unpin;
return 0;
}
engine->emit_fini_breadcrumb_dw = ret;
int intel_engines_init(struct intel_gt *gt)
{
int (*setup)(struct intel_engine_cs *engine);
struct intel_engine_cs *engine;
enum intel_engine_id id;
int err;
if (HAS_EXECLISTS(gt->i915))
setup = intel_execlists_submission_setup;
else
setup = intel_ring_submission_setup;
for_each_engine(engine, gt, id) {
err = engine_setup_common(engine);
if (err)
return err;
err = setup(engine);
if (err)
return err;
err = engine_init_common(engine);
if (err)
return err;
intel_engine_add_user(engine);
}
return 0;
err_unpin:
intel_context_unpin(ce);
intel_context_put(ce);
return ret;
}
/**
@@ -836,6 +800,7 @@ err_unpin:
void intel_engine_cleanup_common(struct intel_engine_cs *engine)
{
GEM_BUG_ON(!list_empty(&engine->active.requests));
tasklet_kill(&engine->execlists.tasklet); /* flush the callback */
cleanup_status_page(engine);
@@ -911,7 +876,7 @@ int intel_engine_stop_cs(struct intel_engine_cs *engine)
if (INTEL_GEN(engine->i915) < 3)
return -ENODEV;
GEM_TRACE("%s\n", engine->name);
ENGINE_TRACE(engine, "\n");
intel_uncore_write_fw(uncore, mode, _MASKED_BIT_ENABLE(STOP_RING));
@@ -920,7 +885,7 @@ int intel_engine_stop_cs(struct intel_engine_cs *engine)
mode, MODE_IDLE, MODE_IDLE,
1000, stop_timeout(engine),
NULL)) {
GEM_TRACE("%s: timed out on STOP_RING -> IDLE\n", engine->name);
ENGINE_TRACE(engine, "timed out on STOP_RING -> IDLE\n");
err = -ETIMEDOUT;
}
@@ -932,7 +897,7 @@ int intel_engine_stop_cs(struct intel_engine_cs *engine)
void intel_engine_cancel_stop_cs(struct intel_engine_cs *engine)
{
GEM_TRACE("%s\n", engine->name);
ENGINE_TRACE(engine, "\n");
ENGINE_WRITE_FW(engine, RING_MI_MODE, _MASKED_BIT_DISABLE(STOP_RING));
}
@@ -1082,9 +1047,10 @@ static bool ring_is_idle(struct intel_engine_cs *engine)
return idle;
}
void intel_engine_flush_submission(struct intel_engine_cs *engine)
bool intel_engine_flush_submission(struct intel_engine_cs *engine)
{
struct tasklet_struct *t = &engine->execlists.tasklet;
bool active = tasklet_is_locked(t);
if (__tasklet_is_scheduled(t)) {
local_bh_disable();
@@ -1095,10 +1061,13 @@ void intel_engine_flush_submission(struct intel_engine_cs *engine)
tasklet_unlock(t);
}
local_bh_enable();
active = true;
}
/* Otherwise flush the tasklet if it was running on another cpu */
tasklet_unlock_wait(t);
return active;
}
/**
@@ -1478,6 +1447,10 @@ void intel_engine_dump(struct intel_engine_cs *engine,
drm_printf(m, "*** WEDGED ***\n");
drm_printf(m, "\tAwake? %d\n", atomic_read(&engine->wakeref.count));
drm_printf(m, "\tBarriers?: %s\n",
yesno(!llist_empty(&engine->barrier_tasks)));
drm_printf(m, "\tLatency: %luus\n",
ewma__engine_latency_read(&engine->latency));
rcu_read_lock();
rq = READ_ONCE(engine->heartbeat.systole);
@@ -1517,9 +1490,9 @@ void intel_engine_dump(struct intel_engine_cs *engine,
print_request_ring(m, rq);
if (rq->hw_context->lrc_reg_state) {
if (rq->context->lrc_reg_state) {
drm_printf(m, "Logical Ring Context:\n");
hexdump(m, rq->hw_context->lrc_reg_state, PAGE_SIZE);
hexdump(m, rq->context->lrc_reg_state, PAGE_SIZE);
}
}
spin_unlock_irqrestore(&engine->active.lock, flags);
@@ -1580,7 +1553,7 @@ int intel_enable_engine_stats(struct intel_engine_cs *engine)
for (port = execlists->pending; (rq = *port); port++) {
/* Exclude any contexts already counted in active */
if (!intel_context_inflight_count(rq->hw_context))
if (!intel_context_inflight_count(rq->context))
engine->stats.active++;
}

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c b/drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c

@@ -63,15 +63,15 @@ static void heartbeat(struct work_struct *wrk)
struct intel_context *ce = engine->kernel_context;
struct i915_request *rq;
if (!intel_engine_pm_get_if_awake(engine))
return;
rq = engine->heartbeat.systole;
if (rq && i915_request_completed(rq)) {
i915_request_put(rq);
engine->heartbeat.systole = NULL;
}
if (!intel_engine_pm_get_if_awake(engine))
return;
if (intel_gt_is_wedged(engine->gt))
goto out;
@@ -215,18 +215,26 @@ out_rpm:
int intel_engine_flush_barriers(struct intel_engine_cs *engine)
{
struct i915_request *rq;
int err = 0;
if (llist_empty(&engine->barrier_tasks))
return 0;
if (!intel_engine_pm_get_if_awake(engine))
return 0;
rq = i915_request_create(engine->kernel_context);
if (IS_ERR(rq))
return PTR_ERR(rq);
if (IS_ERR(rq)) {
err = PTR_ERR(rq);
goto out_rpm;
}
idle_pulse(engine, rq);
i915_request_add(rq);
return 0;
out_rpm:
intel_engine_pm_put(engine);
return err;
}
#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
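The intel_engine_flush_barriers() hunk above is an error-unwind fix: the function now holds an engine-PM reference across the request it builds, so the failure path must release that reference through the new out_rpm label instead of returning directly (the success path falls through to the same label). The general goto-unwind shape, as a standalone sketch with invented stand-ins for the PM calls:

#include <errno.h>
#include <stdlib.h>

static int power_get_if_awake(void) { return 1; }	/* stand-in for PM get */
static void power_put(void) { }				/* stand-in for PM put */
static void *request_create(void) { return malloc(1); }

static int flush_barriers(void)
{
	void *rq;
	int err = 0;

	if (!power_get_if_awake())
		return 0;	/* parked: nothing to flush, no ref taken */

	rq = request_create();
	if (!rq) {
		err = -ENOMEM;	/* must still drop the ref we took */
		goto out_put;
	}

	/* ... emit the idle pulse and submit rq ... */
	free(rq);
out_put:
	power_put();	/* success and failure both release the ref */
	return err;
}

int main(void)
{
	return flush_barriers() ? 1 : 0;
}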

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.c b/drivers/gpu/drm/i915/gt/intel_engine_pm.c

@@ -6,6 +6,7 @@
#include "i915_drv.h"
#include "intel_context.h"
#include "intel_engine.h"
#include "intel_engine_heartbeat.h"
#include "intel_engine_pm.h"
@@ -21,7 +22,7 @@ static int __engine_unpark(struct intel_wakeref *wf)
container_of(wf, typeof(*engine), wakeref);
void *map;
GEM_TRACE("%s\n", engine->name);
ENGINE_TRACE(engine, "\n");
intel_gt_pm_get(engine->gt);
@@ -73,6 +74,15 @@ static inline void __timeline_mark_unlock(struct intel_context *ce,
#endif /* !IS_ENABLED(CONFIG_LOCKDEP) */
static void duration(struct dma_fence *fence, struct dma_fence_cb *cb)
{
struct i915_request *rq = to_request(fence);
ewma__engine_latency_add(&rq->engine->latency,
ktime_us_delta(rq->fence.timestamp,
rq->duration.emitted));
}
static void
__queue_and_release_pm(struct i915_request *rq,
struct intel_timeline *tl,
@@ -80,7 +90,7 @@ __queue_and_release_pm(struct i915_request *rq,
{
struct intel_gt_timelines *timelines = &engine->gt->timelines;
GEM_TRACE("%s\n", engine->name);
ENGINE_TRACE(engine, "\n");
/*
* We have to serialise all potential retirement paths with our
@@ -113,6 +123,8 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
unsigned long flags;
bool result = true;
GEM_BUG_ON(!intel_context_is_barrier(ce));
/* Already inside the kernel context, safe to power down. */
if (engine->wakeref_serial == engine->serial)
return true;
@@ -163,7 +175,18 @@ static bool switch_to_kernel_context(struct intel_engine_cs *engine)
/* Install ourselves as a preemption barrier */
rq->sched.attr.priority = I915_PRIORITY_BARRIER;
__i915_request_commit(rq);
if (likely(!__i915_request_commit(rq))) { /* engine should be idle! */
/*
* Use an interrupt for precise measurement of duration,
* otherwise we rely on someone else retiring all the requests
* which may delay the signaling (i.e. we will likely wait
* until the background request retirement, which runs every
* second or two).
*/
BUILD_BUG_ON(sizeof(rq->duration) > sizeof(rq->submitq));
dma_fence_add_callback(&rq->fence, &rq->duration.cb, duration);
rq->duration.emitted = ktime_get();
}
/* Expose ourselves to the world */
__queue_and_release_pm(rq, ce->timeline, engine);
@@ -183,7 +206,7 @@ static void call_idle_barriers(struct intel_engine_cs *engine)
container_of((struct list_head *)node,
typeof(*cb), node);
cb->func(NULL, cb);
cb->func(ERR_PTR(-EAGAIN), cb);
}
}
@@ -204,7 +227,7 @@ static int __engine_park(struct intel_wakeref *wf)
if (!switch_to_kernel_context(engine))
return -EBUSY;
GEM_TRACE("%s\n", engine->name);
ENGINE_TRACE(engine, "\n");
call_idle_barriers(engine); /* cleanup after wedging */

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_pm.h b/drivers/gpu/drm/i915/gt/intel_engine_pm.h

@@ -7,6 +7,7 @@
#ifndef INTEL_ENGINE_PM_H
#define INTEL_ENGINE_PM_H
#include "i915_request.h"
#include "intel_engine_types.h"
#include "intel_wakeref.h"
@@ -41,6 +42,26 @@ static inline void intel_engine_pm_flush(struct intel_engine_cs *engine)
intel_wakeref_unlock_wait(&engine->wakeref);
}
static inline struct i915_request *
intel_engine_create_kernel_request(struct intel_engine_cs *engine)
{
struct i915_request *rq;
/*
* The engine->kernel_context is special as it is used inside
* the engine-pm barrier (see __engine_park()), circumventing
* the usual mutexes and relying on the engine-pm barrier
* instead. So whenever we use the engine->kernel_context
* outside of the barrier, we must manually handle the
* engine wakeref to serialise with the use inside.
*/
intel_engine_pm_get(engine);
rq = i915_request_create(engine->kernel_context);
intel_engine_pm_put(engine);
return rq;
}
void intel_engine_init__pm(struct intel_engine_cs *engine);
#endif /* INTEL_ENGINE_PM_H */

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h

@@ -7,6 +7,7 @@
#ifndef __INTEL_ENGINE_TYPES__
#define __INTEL_ENGINE_TYPES__
#include <linux/average.h>
#include <linux/hashtable.h>
#include <linux/irq_work.h>
#include <linux/kref.h>
@@ -119,6 +120,9 @@ enum intel_engine_id {
#define INVALID_ENGINE ((enum intel_engine_id)-1)
};
/* A simple estimator for the round-trip latency of an engine */
DECLARE_EWMA(_engine_latency, 6, 4)
struct st_preempt_hang {
struct completion completion;
unsigned int count;
@@ -316,6 +320,13 @@ struct intel_engine_cs {
struct intel_timeline *timeline;
} legacy;
/*
* We track the average duration of the idle pulse on parking the
* engine to keep an estimate of how fast the engine is
* under ideal conditions.
*/
struct ewma__engine_latency latency;
/* Rather than have every client wait upon all user interrupts,
* with the herd waking after every interrupt and each doing the
* heavyweight seqno dance, we delegate the task (of being the
@@ -389,7 +400,10 @@ struct intel_engine_cs {
struct {
void (*prepare)(struct intel_engine_cs *engine);
void (*reset)(struct intel_engine_cs *engine, bool stalled);
void (*rewind)(struct intel_engine_cs *engine, bool stalled);
void (*cancel)(struct intel_engine_cs *engine);
void (*finish)(struct intel_engine_cs *engine);
} reset;
@@ -439,15 +453,7 @@ struct intel_engine_cs {
void (*schedule)(struct i915_request *request,
const struct i915_sched_attr *attr);
/*
* Cancel all requests on the hardware, or queued for execution.
* This should only cancel the ready requests that have been
* submitted to the engine (via the engine->submit_request callback).
* This is called when marking the device as wedged.
*/
void (*cancel_requests)(struct intel_engine_cs *engine);
void (*destroy)(struct intel_engine_cs *engine);
void (*release)(struct intel_engine_cs *engine);
struct intel_engine_execlists execlists;
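The new latency field is declared with DECLARE_EWMA(_engine_latency, 6, 4); assuming the usual include/linux/average.h semantics, that is a fixed-point exponentially weighted moving average with 6 fractional bits in which each new sample contributes a 1/4 weight. A userspace re-derivation of that arithmetic (a sketch, not the kernel macro itself):

#include <stdio.h>

#define EWMA_PRECISION	6	/* fractional bits of fixed point */
#define EWMA_WEIGHT_LOG	2	/* log2(4): avg += (sample - avg) / 4 */

struct ewma {
	unsigned long internal;	/* average << EWMA_PRECISION */
};

static void ewma_add(struct ewma *e, unsigned long val)
{
	unsigned long internal = e->internal;

	e->internal = internal ?
		(((internal << EWMA_WEIGHT_LOG) - internal) +
		 (val << EWMA_PRECISION)) >> EWMA_WEIGHT_LOG :
		(val << EWMA_PRECISION);	/* first sample seeds it */
}

static unsigned long ewma_read(const struct ewma *e)
{
	return e->internal >> EWMA_PRECISION;
}

int main(void)
{
	struct ewma lat = { 0 };
	unsigned long samples[] = { 100, 100, 200, 200 };	/* e.g. usecs */

	for (unsigned int i = 0; i < sizeof(samples) / sizeof(*samples); i++) {
		ewma_add(&lat, samples[i]);
		printf("avg ~ %luus\n", ewma_read(&lat));
	}
	return 0;
}

Feeding 100, 100, 200, 200 yields roughly 100, 100, 125, 143: the estimate tracks new samples while damping one-off spikes, which suits an idle-pulse round-trip estimator.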

Some files were not shown because too many files changed in this diff.