drm/i915: unpin backing storage in dmabuf_unmap

This fixes a WARN in i915_gem_free_object when the
obj->pages_pin_count isn't 0.

v2: Add locking to unmap, noticed by Chris Wilson. Note that even
though we call unmap with our own dev->struct_mutex held that won't
result in an immediate deadlock since we never go through the dma_buf
interfaces for our own, reimported buffers. It's still easy to
blow up and anger lockdep, but that's already the case with our ->map
implementation. Fixing this for real will involve per dma-buf ww mutex
locking by the callers. And lots of fun. So go with the duct-tape
approach for now.

Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Tested-by: Armin K. <krejzi@email.com> (v1)
Tested-by: Dave Airlie <airlied@redhat.com>
Acked-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@gmail.com>
Commit eb91626ac4 (parent d2b2c08456)
Author: Daniel Vetter, 2013-08-08 09:10:37 +02:00; committed by Dave Airlie
1 changed file with 8 additions and 0 deletions


@@ -85,9 +85,17 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
 				   struct sg_table *sg,
 				   enum dma_data_direction dir)
 {
+	struct drm_i915_gem_object *obj = attachment->dmabuf->priv;
+
+	mutex_lock(&obj->base.dev->struct_mutex);
+
 	dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
 	sg_free_table(sg);
 	kfree(sg);
+
+	i915_gem_object_unpin_pages(obj);
+
+	mutex_unlock(&obj->base.dev->struct_mutex);
 }
 
 static void i915_gem_dmabuf_release(struct dma_buf *dma_buf)