drm/i915: Protect gen7 irq_seqno_barrier with uncore lock

Faced with sporadic machine hangs on gen7 that mimic the issue of
concurrent writes to the same cacheline and seem to start with
commit 9b9ed30936 ("drm/i915: Remove forcewake dance from seqno/irq
barrier on legacy gen6+"), let us restore the spinlock around the mmio
read.
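
For reference, the barrier after this change reads roughly as follows
(a sketch reconstructed from the diff below, with the workaround
comment abbreviated):

  static void
  gen6_seqno_barrier(struct intel_engine_cs *engine)
  {
  	struct drm_i915_private *dev_priv = engine->dev->dev_private;

  	/* Read a CS register (ACTHD) to force ordering between the irq
  	 * and seqno writes before reading the status page; on gen7 also
  	 * take the uncore lock to guard against concurrent cacheline
  	 * access, which can otherwise hang the whole machine.
  	 */
  	spin_lock_irq(&dev_priv->uncore.lock);
  	POSTING_READ_FW(RING_ACTHD(engine->mmio_base));
  	spin_unlock_irq(&dev_priv->uncore.lock);
  }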

Fixes: 9b9ed30936 ("drm/i915: Remove forcewake dance from seqno/irq barrier on legacy gen6+")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461744121-27051-1-git-send-email-chris@chris-wilson.co.uk
Tested-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
(cherry picked from commit bcbdb6d011)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Chris Wilson 2016-04-27 09:02:01 +01:00, committed by Jani Nikula
Parent 5fbd0418ee
Commit e32da7adaf
1 changed file with 7 additions and 1 deletion

@@ -1573,6 +1573,8 @@ pc_render_add_request(struct drm_i915_gem_request *req)
 static void
 gen6_seqno_barrier(struct intel_engine_cs *engine)
 {
+	struct drm_i915_private *dev_priv = engine->dev->dev_private;
+
 	/* Workaround to force correct ordering between irq and seqno writes on
 	 * ivb (and maybe also on snb) by reading from a CS register (like
 	 * ACTHD) before reading the status page.
@@ -1584,9 +1586,13 @@ gen6_seqno_barrier(struct intel_engine_cs *engine)
 	 * the write time to land, but that would incur a delay after every
 	 * batch i.e. much more frequent than a delay when waiting for the
 	 * interrupt (with the same net latency).
+	 *
+	 * Also note that to prevent whole machine hangs on gen7, we have to
+	 * take the spinlock to guard against concurrent cacheline access.
 	 */
-	struct drm_i915_private *dev_priv = engine->dev->dev_private;
+	spin_lock_irq(&dev_priv->uncore.lock);
 	POSTING_READ_FW(RING_ACTHD(engine->mmio_base));
+	spin_unlock_irq(&dev_priv->uncore.lock);
 }
 
 static u32