Apparently stuff works that way on those machines.
I agree with Chris' concern that this is a bit risky but imo worth a
shot in -next just for fun. Afaics all these machines have the pci
resources allocated like that by the BIOS, so I suspect that it's all
ok.
This regression goes back to
commit eaba1b8f33
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Jul 4 12:28:35 2013 +0100
drm/i915: Verify that our stolen memory doesn't conflict
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76983
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71031
Tested-by: lu hua <huax.lu@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Paul Menzel <paulepanter@users.sourceforge.net>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
If these flags are on the object level it will be more difficult to allow
for multiple VMAs per object.
v2: Simplification and cleanup after code review comments (Chris Wilson).
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Based upon a patch from Deepak, but reworked to only apply on gen7+
and with the logic a bit clarified.
v2: Fix s/SHIFT/MASK/ fumble that Ville spotted.
Cc: Deepak S <deepak.s@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On most gen2-4 platforms the GTT can be (or maybe always is?)
inside the stolen memory region. If that's the case, reduce the
size of the stolen memory appropriately to make sure we
don't clobber the GTT.
v2: Deal with gen4 36 bit physical address
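Roughly, the check amounts to something like this (sketch only; the actual
register reads and per-gen field handling differ):
    #include <linux/types.h>

    /* Sketch: if the GTT base falls inside the stolen range, trim stolen so
     * the allocator never hands out the part occupied by the GTT. */
    static unsigned long trim_stolen_for_gtt(u64 stolen_base,
                                             unsigned long stolen_size,
                                             u64 gtt_base)
    {
            if (gtt_base >= stolen_base &&
                gtt_base < stolen_base + stolen_size)
                    return gtt_base - stolen_base;

            return stolen_size;
    }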
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=80151
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The GEN FBC unit provides the ability to set a low pass on frames it
attempts to compress. If a frame is less than a certain amount of
compressibility (2:1, 4:1) it will not bother. This allows the driver to
reduce the size it requests out of stolen memory.
Unluckily, a few months ago, Ville actually began using this feature for
framebuffers that are 16bpp (not sure why not 8bpp). In those cases, we
are already using this mechanism for a different purpose, and so we can
only achieve one further level of compression (2:1 -> 4:1).
FBC GEN1, i.e. pre-G45, is ignored.
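For reference, the knob being used is a compression limit field in the FBC
control register; an illustrative mapping (the encodings below are
placeholders, the real per-gen register layout differs):
    /* Sketch: map the chosen threshold onto the FBC compression limit field. */
    #define FBC_LIMIT_1X_SKETCH 0
    #define FBC_LIMIT_2X_SKETCH 1
    #define FBC_LIMIT_4X_SKETCH 2

    static unsigned int fbc_limit_from_threshold(int threshold)
    {
            switch (threshold) {
            case 4:
                    return FBC_LIMIT_4X_SKETCH;
            case 2:
                    return FBC_LIMIT_2X_SKETCH;
            default:
                    return FBC_LIMIT_1X_SKETCH;     /* 1:1, no limit */
            }
    }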
The cleverness of the patch is Art's. The bugs are mine.
v2: Update message and include missing threshold case 3 (spotted by Arthur).
Cc: Art Runyan <arthur.j.runyan@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Right now, there is no threshold (0 means fail, 1 means a 1:1 compression
limit). This is to split out the non-functional change here from the
functional change in the next patch.
The next patch will start to attempt to reduce the amount of CFB space
we need for dire situations. It will be contained within this function.
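A rough sketch of the function shape at this point (assuming the drm_mm
allocator API of this era; the retry at smaller sizes only arrives with the
next patch):
    #include <drm/drm_mm.h>

    /* Sketch: returns the compression threshold we found space for;
     * 0 means no space at all, 1 means a full (1:1) sized CFB. */
    static int find_cfb_space_sketch(struct drm_mm *stolen,
                                     struct drm_mm_node *node,
                                     unsigned long size)
    {
            int ret;

            ret = drm_mm_insert_node(stolen, node, size, 4096,
                                     DRM_MM_SEARCH_DEFAULT);
            if (ret == 0)
                    return 1;

            /* Next patch: retry with size/2 (2:1) and size/4 (4:1). */
            return 0;
    }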
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
We are already using the size to determine whether or not to free the
object, so there is no functional change there. Almost everything else
has changed to static allocations of the drm_mm_node too.
Aside from bringing this inline with much of our other code, this makes
error paths slightly simpler, which benefits the look of an upcoming
patch.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Now that we have a release hook into i915_gem_object_free, we can move
the explicit call to the internal stolen function and hook it up
through the callback instead.
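The shape of the hookup is roughly as follows (sketch; the driver's actual
ops struct has more members and different names):
    struct drm_i915_gem_object;     /* opaque here */

    /* Sketch: stolen objects carry an ops table whose .release callback
     * frees the stolen drm_mm node, replacing the explicit call in the
     * object free path. */
    struct gem_object_ops_sketch {
            void (*release)(struct drm_i915_gem_object *obj);
    };

    static void stolen_release_sketch(struct drm_i915_gem_object *obj)
    {
            /* remove the object's node from the stolen allocator here */
    }

    static const struct gem_object_ops_sketch stolen_ops_sketch = {
            .release = stolen_release_sketch,
    };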
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want future generations to at least attempt to use all features, so
restrict the stolen memory disabling when VT-d is enabled to the
latest generation we have reports for, which is HSW per the original
report.
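The resulting condition boils down to something like this (sketch; the real
code uses the driver's platform/gen macros):
    #include <linux/types.h>

    /* Sketch: keep the DMAR workaround only up to the newest platform we
     * have corruption reports for (HSW, i.e. gen 7). */
    static bool disable_stolen_for_dmar(int gen, bool iommu_gfx_mapped)
    {
            return iommu_gfx_mapped && gen <= 7;
    }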
Also, once we get a bit of a handle on some of the mysterious
framebuffer-in-stolen-memory issues that still haunt bugzilla, we should
probably drop this hack again and see what happens.
This was introduced in
commit 0f4706d274
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Tue Mar 18 14:50:50 2014 +0200
drm/i915: Disable stolen memory when DMAR is active
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: David Woodhouse <dwmw2@infradead.org>
References: https://bugs.freedesktop.org/show_bug.cgi?id=68535
Acked-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have reports of heavy screen corruption if we try to use the stolen
memory reserved by the BIOS whilst the DMA-Remapper is active. This
quirk may only be specific to a few machines or BIOSes, but first let's
apply the big hammer and always disable use of stolen memory when DMAR
is active.
v2 by Jani: Rebase on -fixes, only look at intel_iommu_gfx_mapped.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=68535
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
There is a conflict seen when requesting the kernel to reserve
the physical space used for the stolen area. This is because
some BIOSes wrap the stolen area into the root PCI bus resource, but with
an off-by-one error. As a workaround we retry the reservation with an
offset of 1 instead of 0.
v2: updated commit message & the comment in source file (Daniel)
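Sketch of the retry (illustrative; in the driver this sits in the stolen
setup path):
    #include <linux/ioport.h>

    /* Sketch: if reserving [base, base + size) collides with the
     * (off-by-one) root bus resource, retry one byte in. */
    static struct resource *reserve_stolen_sketch(resource_size_t base,
                                                  resource_size_t size)
    {
            struct resource *r;

            r = request_mem_region(base, size, "Graphics Stolen Memory");
            if (r == NULL)
                    r = request_mem_region(base + 1, size - 1,
                                           "Graphics Stolen Memory");
            return r;
    }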
Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Tested-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
The 'offset' field of the 'scatterlist' structure was wrongly
programmed with the offset value from the base of the stolen area,
whereas this field indicates the offset at which the data of interest
starts within the first PAGE pointed to by the 'scatterlist'
structure. As a result, when a new GEM object allocated from the stolen
area is mapped to the GTT, it could lead to an overwrite of GTT entries
because the page count calculation goes wrong; refer to the function
'sg_page_count'.
v2: Modified the commit message. (Chris)
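Sketch of the corrected setup (simplified; a stolen object has no struct
pages, so everything lives in the single sg entry's DMA fields):
    #include <linux/scatterlist.h>

    /* Sketch: 'offset' is the object's offset inside stolen and belongs in
     * the DMA address; sg->offset (offset within the first page) stays 0. */
    static void fill_stolen_sg_sketch(struct sg_table *st,
                                      dma_addr_t stolen_base,
                                      unsigned long offset,
                                      unsigned long size)
    {
            struct scatterlist *sg = st->sgl;

            sg->offset = 0;                 /* was wrongly set to 'offset' */
            sg->length = size;
            sg_dma_address(sg) = stolen_base + offset;
            sg_dma_len(sg) = size;
    }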
Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: stable@vger.kernel.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=71908
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=69104
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
But only when we indeed set up a gtt mapping. We need this since the
vma also holds a pages_pin_count, on top of the unconditional
pages_pin_count we grab for all stolen objects (to avoid swap-out).
This should avoid a pages_pin_count underrun when cleaning up
framebuffer objects taken over from the BIOS.
Chris mentioned in his review that this bug even predates the vma
conversion.
Reported-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v3.12-rc2' into drm-intel-next
Backmerge Linux 3.12-rc2 to prep for a bunch of -next patches:
- Header cleanup in intel_drv.h, both changed in -fixes and my current
-next pile.
- Cursor handling cleanup for -next which depends upon the cursor
handling fix merged into -rc2.
All just trivial conflicts of the "changed adjacent lines" type:
drivers/gpu/drm/i915/i915_gem.c
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/i915/intel_drv.h
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Paulo reported that if he set the amount of reserved memory to 0, then
we emitted a warning about a conflict before disabling our use of stolen
memory. This was introduced with
commit eaba1b8f33
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Thu Jul 4 12:28:35 2013 +0100
drm/i915: Verify that our stolen memory doesn't conflict
and is simply fixed by checking for no reservation first.
Reported-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the execbuf code we don't clean up any vmas which ended up not
getting bound, for code simplicity. To make sure that we don't end up
creating multiple vmas for the same vm, kill the somewhat dangerous
vma_create function and inline it into lookup_or_create.
This is just a safety measure to prevent surprises in the future.
Also update the somewhat confusing comment in the execbuf code and
clarify what kind of magic is going on with a new one.
v2: Keep the function separate as requested by Chris. But give it a __
prefix for paranoia and move it tighter together with the other vma
stuff.
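The resulting shape is roughly this (sketch; i915-internal types assumed):
    /* Sketch: __i915_gem_vma_create() is the low-level helper (caller must
     * know no vma exists yet); everyone else goes through lookup_or_create
     * so we never get two vmas for the same (obj, vm) pair. */
    struct i915_vma *
    i915_gem_obj_lookup_or_create_vma(struct drm_i915_gem_object *obj,
                                      struct i915_address_space *vm)
    {
            struct i915_vma *vma;

            vma = i915_gem_obj_to_vma(obj, vm);     /* existing lookup */
            if (!vma)
                    vma = __i915_gem_vma_create(obj, vm);

            return vma;
    }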
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <ben@bwidawsk.net>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Need to get my stuff out the door ;-) Highlights:
- pc8+ support from Paulo
- more vma patches from Ben.
- Kconfig option to enable preliminary support by default (Josh
Triplett)
- Optimized cpu cache flush handling and support for write-through caching
of display planes on Iris (Chris)
- rc6 tuning from Stéphane Marchesin for more stability
- VECS seqno wrap/semaphores fix (Ben)
- a pile of smaller cleanups and improvements all over
Note that I've ditched Ben's execbuf vma conversion for 3.12 since it's not
yet ready. But there's still other vma conversion stuff in here.
* tag 'drm-intel-next-2013-08-23' of git://people.freedesktop.org/~danvet/drm-intel: (62 commits)
drm/i915: Print seqnos as unsigned in debugfs
drm/i915: Fix context size calculation on SNB/IVB/VLV
drm/i915: Use POSTING_READ in lcpll code
drm/i915: enable Package C8+ by default
drm/i915: add i915.pc8_timeout function
drm/i915: add i915_pc8_status debugfs file
drm/i915: allow package C8+ states on Haswell (disabled)
drm/i915: fix SDEIMR assertion when disabling LCPLL
drm/i915: grab force_wake when restoring LCPLL
drm/i915: drop WaMbcDriverBootEnable workaround
drm/i915: Cleaning up the relocate entry function
drm/i915: merge HSW and SNB PM irq handlers
drm/i915: fix how we mask PMIMR when adding work to the queue
drm/i915: don't queue PM events we won't process
drm/i915: don't disable/reenable IVB error interrupts when not needed
drm/i915: add dev_priv->pm_irq_mask
drm/i915: don't update GEN6_PMIMR when it's not needed
drm/i915: wrap GEN6_PMIMR changes
drm/i915: wrap GTIMR changes
drm/i915: add the FCLK case to intel_ddi_get_cdclk_freq
...
Use the standard inversely ordered goto label stack for everything.
Spotted while reviewing a place where we might need to call
vma_destroy but failed to do so.
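For reference, the pattern in question is the usual inversely ordered
unwind (generic sketch with placeholder steps, not the i915 code):
    /* Generic sketch of an inversely ordered goto label stack: each label
     * undoes exactly the steps that had already succeeded. */
    static int step_a(void) { return 0; }
    static int step_b(void) { return 0; }
    static void undo_a(void) { }

    static int setup_sketch(void)
    {
            int ret;

            ret = step_a();
            if (ret)
                    goto err;

            ret = step_b();
            if (ret)
                    goto err_undo_a;

            return 0;

    err_undo_a:
            undo_a();
    err:
            return ret;
    }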
Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Daniel writes:
New pile of stuff for -next:
- Cleanup of the old crtc helper callbacks, all encoders are now converted
to the i915 modeset infrastructure.
- Massive amount of wm patches from Ville for ilk, snb, ivb, hsw, this is
prep work to eventually get things going for nuclear pageflips where we
need to adjust watermarks on the fly.
- More vm/vma patches from Ben. This refactoring isn't yet fully rolled
  out; we're still missing the execbuf conversion and some of the low-level
  bind/unbind support code.
- Convert our hdmi infoframe code to use the new common helper functions
(Damien). This contains some bugfixes for the common infoframe helpers.
- Some cruft removal from Damien.
- Various smaller bits&pieces all over, as usual.
* tag 'drm-intel-next-2013-08-09' of git://people.freedesktop.org/~danvet/drm-intel: (105 commits)
drm/i915: Fix FB WM for HSW
drm/i915: expose HDMI connectors on port C on BYT
drm/i915: fix a limit check in hsw_compute_wm_results()
drm/i915: unbreak i915_gem_object_ggtt_unbind()
drm/i915: Make intel_set_mode() static
drm/i915: Remove intel_modeset_disable()
drm/i915: Make intel_encoder_dpms() static
drm/i915: Make i915_hangcheck_elapsed() static
drm/i915: Fix #endif comment
drm/i915: Remove i915_gem_object_check_coherency()
drm/i915: Remove stale prototypes
drm/i915: List objects allocated from stolen memory in debugfs
drm/i915: Always call intel_update_sprite_watermarks() when disabling a plane
drm/i915: Pass plane and crtc to intel_update_sprite_watermarks
drm/i915: Don't try to disable plane if it's already disabled
drm/i915: Pass crtc to our update/disable_plane hooks
drm/i915: Split plane watermark parameters into a separate struct
drm/i915: Pull some watermarks state into a separate structure
drm/i915: Calculate max watermark levels for ILK+
drm/i915: Rename hsw_lp_wm_result to intel_wm_level
...
As a corollary to reviewing the interaction between LLC and our cache
domains, the GPU PTE bits are independent of the CPU PAT bits. As such
we can set the cache level on stolen memory based on how we wish the GPU
to cache accesses to it. So we are free to set the same default cache
levels as for normal bo, i.e. enable LLC caching by default where
appropriate.
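In practice that is a one-liner along these lines (sketch of the default
choice in the stolen object creation path):
    /* Sketch: stolen objects get the same default GPU cache level as
     * regular BOs. */
    obj->cache_level = HAS_LLC(dev) ? I915_CACHE_LLC : I915_CACHE_NONE;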
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
formerly: "drm/i915: Create VMAs (part 5) - move mm_list"
The mm_list is used for the active/inactive LRUs. Since those LRUs are
per address space, the link should be per VMA.
Because we'll only ever have 1 VMA before this point, it's not incorrect
to defer this change until this point in the patch series, and doing it
here makes the change much easier to understand.
Shamelessly manipulated out of Daniel:
"active/inactive stuff is used by eviction when we run out of address
space, so needs to be per-vma and per-address space. Bound/unbound otoh
is used by the shrinker which only cares about the amount of memory used
and not one bit about in which address space this memory is all used in.
Of course to actually kick out an object we need to unbind it from every
address space, but for that we have the per-object list of vmas."
v2: only bump GGTT LRU in i915_gem_object_set_to_gtt_domain (Chris)
v3: Moved earlier in the series
v4: Add dropped message from v3
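With the link on the vma and the LRU heads on the address space, the move
becomes per-vma, roughly (sketch, names approximate):
    /* Sketch: LRU bookkeeping now goes through the vma and its vm. */
    list_move_tail(&vma->mm_list, &vma->vm->inactive_list);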
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Frob patch to apply and use vma->node.size directly as
discussed with Ben. Also drop a needless BUG_ON before move_to_inactive,
the function itself has the same check.]
[danvet 2nd: Rebase on top of the lost "drm/i915: Cleanup more of VMA
in destroy", specifically unlink the vma from the mm_list in
vma_unbind (to keep it symmetric with bind_to_vm) instead of
vma_destroy.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Neat that QA (and Ben) keeps on humming along while I'm on vacation, so
you already get the next feature pull request:
- proper eLLC support for HSW from Ben
- more interrupt refactoring
- add w/a tags where we implement them already (Damien)
- hangcheck fixes (Chris) + hangcheck stats (Mika)
- flesh out the new vm structs for ppgtt and ggtt (Ben)
- PSR for Haswell, still disabled by default (Rodrigo et al.)
- pc8+ refclock sequence code from Paulo
- more interrupt refactoring from Paulo, unifying ilk/snb with the ivb/hsw
interrupt code
- full solution for the Haswell concurrent reg access issues (Chris)
- fix racy object accounting, used by some new leak tests
- fix sync polarity settings on ch7xxx dvo encoder
- random bits&pieces, little fixes and better debug output all over
[airlied: fix conflict with drm_mm cleanups]
* tag 'drm-intel-next-2013-07-26-fixed' of git://people.freedesktop.org/~danvet/drm-intel: (289 commits)
drm/i915: Do not dereference NULL crtc or fb until after checking
drm/i915: fix pnv display core clock readout out
drm/i915: Replace open-coded offset_in_page()
drm/i915: Retry DP aux_ch communications with a different clock after failure
drm/i915: Add messages useful for HPD storm detection debugging (v2)
drm/i915: dvo_ch7xxx: fix vsync polarity setting
drm/i915: fix the racy object accounting
drm/i915: Convert the register access tracepoint to be conditional
drm/i915: Squash gen lookup through multiple indirections inside GT access
drm/i915: Use the common register access functions for NOTRACE variants
drm/i915: Use a private interface for register access within GT
drm/i915: Colocate all GT access routines in the same file
drm/i915: fix reference counting in i915_gem_create
drm/i915: Use Graphics Base of Stolen Memory on all gen3+
drm/i915: disable stolen mem for OVERLAY_NEEDS_PHYSICAL
drm/i915: add functions to disable and restore LCPLL
drm/i915: disable CLKOUT_DP when it's not needed
drm/i915: extend lpt_enable_clkout_dp
drm/i915: fix up error cleanup in i915_gem_object_bind_to_gtt
drm/i915: Add some debug breadcrumbs to connector detection
...
i915 is the last user of the weird search+get_block drm_mm API. Convert it
to an explicit kmalloc()+insert_node(). This drops the last user of the
node-cache in drm_mm. We can remove it now in a follow-up patch.
v2:
- simplify error path in i915_setup_compression()
v3:
- simplify error path even more
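The new call shape looks roughly like this (sketch, using the drm_mm API of
this era):
    #include <linux/slab.h>
    #include <drm/drm_mm.h>

    /* Sketch: explicit kmalloc of the node plus drm_mm_insert_node(),
     * replacing the old search_free()/get_block() pair and its node cache. */
    static struct drm_mm_node *alloc_node_sketch(struct drm_mm *mm,
                                                 unsigned long size,
                                                 unsigned alignment)
    {
            struct drm_mm_node *node;
            int ret;

            node = kzalloc(sizeof(*node), GFP_KERNEL);
            if (!node)
                    return NULL;

            ret = drm_mm_insert_node(mm, node, size, alignment,
                                     DRM_MM_SEARCH_DEFAULT);
            if (ret) {
                    kfree(node);
                    return NULL;
            }

            return node;
    }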
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Add a "best_match" flag similar to the drm_mm_search_*() helpers so we
can convert TTM to use them in follow up patches. We can also inline the
non-generic helpers and move them into the header to allow compile-time
optimizations.
To make calls to drm_mm_{search,insert}_node() more readable, this
converts the boolean argument to a flagset. There are pending patches that
add additional flags for top-down allocators and more.
v2:
- use flag parameter instead of boolean "best_match"
- convert *_search_free() helpers to also use flags argument
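The resulting interface is roughly (sketch; see drm_mm.h for the full
version):
    /* Flagset replacing the old boolean 'best_match' argument. */
    enum drm_mm_search_flags {
            DRM_MM_SEARCH_DEFAULT = 0,
            DRM_MM_SEARCH_BEST    = 1 << 0,
    };

    int drm_mm_insert_node(struct drm_mm *mm, struct drm_mm_node *node,
                           unsigned long size, unsigned alignment,
                           enum drm_mm_search_flags flags);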
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Just some small cleanups, and a rename of vm->ggtt_vm requested by
Daniel.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This backmerges Linus' merge commit of the latest drm-fixes pull:
commit 549f3a1218
Merge: 42577ca 058ca4a
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date: Tue Jul 23 15:47:08 2013 -0700
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
We've accrued a few too many conflicts, but the real reason is that I
want to merge the 100% solution for Haswell concurrent registers
writes into drm-intel-next. But that depends upon the 90% bandaid
merged into -fixes:
commit a7cd1b8fea
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Fri Jul 19 20:36:51 2013 +0100
drm/i915: Serialize almost all register access
Also, we can roll up on accrued conflicts.
Usually I'd backmerge a tagged -rc, but I want to get this done before
heading off to vacations next week ;-)
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_gem.c
v2: For added hilarity we have an init sequence conflict around the
gt_lock, so need to move that one, too. Spotted by Jani Nikula.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So I made the mistake of missing that the desktop and mobile chipsets
have different layouts in their PCI configuration space, and we were
setting the wrong physical address for stolen memory on mobile chipsets.
Since all gen3+ are actually consistent in the location of the GBSM
register in the PCI configuration space on device 2 (the GPU), use it.
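Sketch of the unified readout (the config offset and mask shown here are
illustrative; consult the spec for the exact location per gen):
    #include <linux/pci.h>

    /* Sketch: read the Graphics Base of Stolen Memory from the GPU's own
     * PCI config space (device 2) on all gen3+, instead of poking
     * chipset-specific host bridge registers. Offset/mask illustrative. */
    #define GBSM_SKETCH 0x5c

    static u32 read_stolen_base_sketch(struct pci_dev *gpu)
    {
            u32 base = 0;

            pci_read_config_dword(gpu, GBSM_SKETCH, &base);

            return base & ~((1u << 20) - 1);        /* 1MiB aligned */
    }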
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Drop cc: stable and fudge conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drm_gem_object_init() and drm_gem_private_object_init() do exactly the
same (except for shmem alloc) so make the first use the latter to reduce
code duplication.
Also drop the return code from drm_gem_private_object_init(). It seems
unlikely that we will extend it any time soon so no reason to keep it
around. This simplifies code paths in drivers, too.
Last but not least, fix gma500 to call drm_gem_object_release() before
freeing objects that were allocated via drm_gem_private_object_init().
That isn't actually necessary for now, but might be in the future.
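After the change the shmem variant is roughly a thin wrapper (sketch; drm
headers assumed, error handling trimmed):
    /* Sketch: drm_gem_object_init() now delegates the common part to
     * drm_gem_private_object_init() and only adds the shmem backing. */
    int drm_gem_object_init(struct drm_device *dev,
                            struct drm_gem_object *obj, size_t size)
    {
            struct file *filp;

            drm_gem_private_object_init(dev, obj, size);

            filp = shmem_file_setup("drm mm object", size, VM_NORESERVE);
            if (IS_ERR(filp))
                    return PTR_ERR(filp);

            obj->filp = filp;
            return 0;
    }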
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Acked-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Dave Airlie <airlied@gmail.com>
i915_gem_vma_create() returns an ERR_PTR() or a valid pointer; it never
returns NULL.
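i.e. callers need a check along these lines (sketch):
    vma = i915_gem_vma_create(obj, vm);
    if (IS_ERR(vma))
            return PTR_ERR(vma);    /* checking for NULL would miss errors */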
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Highlights:
- follow-up refactoring after the shared dpll rework that landed in 3.11
- oddball prep cleanups from Ben for ppgtt
- encoder->get_config state tracking infrastructure from Jesse
- used by the experimental fastboot support from Jesse (disabled by
default)
- make the error state file official and add it to our sysfs interface
(Mika)
- drm_mm prep changes from Ben, prepares to embedd the drm_mm_node (which
will be used by the vma rework later on)
- interrupt handling rework, follow up cleanups to the VECS enabling, hpd
storm handling and fifo underrun reporting.
- Big pile of smaller cleanups, code improvements and related stuff.
* tag 'drm-intel-next-2013-07-12' of git://people.freedesktop.org/~danvet/drm-intel: (72 commits)
drm/i915: clear DPLL reg when disabling i9xx dplls
drm/i915: Fix up cpt pixel multiplier enable sequence
drm/i915: clean up vlv ->pre_pll_enable and pll enable sequence
drm/i915: move error state to own compilation unit
drm/i915: Don't attempt to read an unitialized stack value
drm/i915: Use for_each_pipe() when possible
drm/i915: don't enable PM_VEBOX_CS_ERROR_INTERRUPT
drm/i915: unify ring irq refcounts (again)
drm/i915: kill dev_priv->rps.lock
drm/i915: queue work outside spinlock in hsw_pm_irq_handler
drm/i915: streamline hsw_pm_irq_handler
drm/i915: irq handlers don't need interrupt-safe spinlocks
drm/i915: kill lpt pch transcoder->crtc mapping code for fifo underruns
drm/i915: improve GEN7_ERR_INT clearing for fifo underrun reporting
drm/i915: improve SERR_INT clearing for fifo underrun reporting
drm/i915: extract ibx_display_interrupt_update
drm/i915: remove unused members from drm_i915_private
drm/i915: don't frob mm.suspended when not using ums
drm/i915: Fix VLV DP RBR/HDMI/DAC PLL LPF coefficients
drm/i915: WARN if the bios reserved range is bigger than stolen size
...
Conflicts:
drivers/gpu/drm/i915/i915_gem.c
Formerly: "drm/i915: Create VMAs (part 1)"
In a previous patch, the notion of a VM was introduced. A VMA describes
an area of part of the VM address space. A VMA is similar to the concept
in the linux mm. However, instead of representing regular memory, a VMA
is backed by a GEM BO. There may be many VMAs for a given object, one
for each VM the object is to be used in. This may occur through flink,
dma-buf, or a number of other transient states.
Currently the code depends on only 1 VMA per object, for the global GTT
(and aliasing PPGTT). The following patches will address this and make
the rest of the infrastructure more suited
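Conceptually a VMA at this stage looks something like this (sketch; fields
trimmed heavily and names approximate):
    #include <linux/list.h>
    #include <drm/drm_mm.h>

    struct drm_i915_gem_object;
    struct i915_address_space;

    /* Sketch: one VMA per (object, address space) pair; an object keeps a
     * list of all its vmas. */
    struct i915_vma_sketch {
            struct drm_mm_node node;                /* range inside the VM */
            struct drm_i915_gem_object *obj;        /* backing GEM BO */
            struct i915_address_space *vm;          /* the VM it lives in */
            struct list_head vma_link;              /* on obj's vma list */
    };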
v2: s/i915_obj/i915_gem_obj (Chris)
v3: Only move an object to the now global unbound list if there are no
more VMAs for the object which are bound into a VM (ie. the list is
empty).
v4: killed obj->gtt_space
some reworks due to rebase
v5: Free vma on error path (Imre)
v6: Another missed vma free in i915_gem_object_bind_to_gtt error path
(Imre)
Fixed vma freeing in stolen preallocation (Imre)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
[danvet: Squash in fixup from Ben to not deref a non-existing vma in
set_cache_level, reported by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The odds of this happening are *extremely* unlikely.
Reported-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Shamelessly manipulated out of Daniel :-)
"When moving the lists around explain that the active/inactive stuff is
used by eviction when we run out of address space, so needs to be
per-vma and per-address space. Bound/unbound otoh is used by the
shrinker which only cares about the amount of memory used and not one
bit about in which address space this memory is all used in. Of course
to actually kick out an object we need to unbind it from every address
space, but for that we have the per-object list of vmas."
v2: Leave the bound list as a global one. (Chris, indirectly)
v3: Rebased with no i915_gtt_vm. In most places I added a new *vm local,
since it will eventually be replaced by a vm argument.
Put comment back inline, since it no longer makes sense to do otherwise.
v4: Rebased on hangcheck/error state movement
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Every address space should support object allocation. It therefore makes
sense to have the allocator be part of the "superclass" which GGTT and
PPGTT will derive.
Since our maximum address space size is only 2GB we're not yet able to
avoid doing allocation/eviction; but we'd hope one day this becomes
almost irrelevant.
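i.e. the common base struct grows the allocator, roughly (sketch, trimmed):
    #include <linux/list.h>
    #include <drm/drm_mm.h>

    /* Sketch: GGTT and every PPGTT derive from this; each VM brings its
     * own range allocator plus its own LRU lists. */
    struct i915_address_space_sketch {
            struct drm_mm mm;               /* per-VM range allocator */
            unsigned long start;
            size_t total;
            struct list_head active_list;
            struct list_head inactive_list;
    };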
v2: Rebased
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Bail out if we hit the WARN_ON to avoid fallout later on. Spotted
by Chris Wilson.
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Sanity check that the memory region found through the Graphics Base
of Stolen Memory is reserved and hidden from the rest of the system
through the use of the resource API.
v2: "Graphics Stolen Memory" is such a more bodacious name than the lame
"i915 stolen", and convert to using devres for automagical cleanup of
the resource. (danvet)
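Sketch of the check (simplified; the real patch reports the conflicting
range in more detail):
    #include <linux/device.h>
    #include <linux/ioport.h>

    /* Sketch: verify the stolen range is reserved and hidden from the rest
     * of the system; devres releases it automatically on driver unbind. */
    static int verify_stolen_reserved_sketch(struct device *dev,
                                             resource_size_t base,
                                             resource_size_t size)
    {
            struct resource *r;

            r = devm_request_mem_region(dev, base, size,
                                        "Graphics Stolen Memory");
            if (!r) {
                    dev_err(dev, "conflict with stolen region [0x%llx-0x%llx]\n",
                            (unsigned long long)base,
                            (unsigned long long)(base + size - 1));
                    return -EBUSY;
            }

            return 0;
    }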
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Dump proper hexcodes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Embedding the node in the obj is more natural in the transition to VMAs
which will also have embedded nodes. This change also helps transition
away from put_block to remove node.
Though it's quite an uncommon occurrence, it's somewhat convenient to not
fail at bind time because we cannot allocate the node. Though in
practice there are other allocations (like the request structure) which
would probably make this point not terribly useful.
Quoting Daniel:
Note that the only difference between put_block and remove_node is
that the former fills up the preallocation cache. Which we don't need
anyway and hence is just wasted space.
v2: Clean up the stolen preallocation code.
Rebased on the reserve_node patches
renames ggtt_ stuff to gtt_ stuff
WARN_ON if the object is already bound (which doesn't mean it's in the
bound list, tricky)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the getters in place from the previous patch this member serves no
purpose other than saving one spare pointer chase, which will be killed
in the next patch anyway.
Moving to VMAs, this member adds unnecessary confusion since an object
may exist at different offsets in different VMs.
v2: Properly preserve the stolen offset. This code is a bit hacky but it
all goes away when we embed the drm_mm_node and removes the need for the
incorrect patch I submitted previously: "Use gtt_space->start for stolen
reservation"
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the previous patch we no longer actually create a node, we simply
find the correct hole and occupy it. This very well could have been
squashed with the last patch, but since I already had David's review, I
figured it's easiest to keep it distinct.
Also update the users in i915. Conveniently this is the only user of the
interface.
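For the stolen/preallocated path that amounts to roughly (sketch; field
names approximate):
    /* Sketch: the node's placement is decided up front and then reserved,
     * rather than asking the allocator to create a block at that spot. */
    vma->node.start = gtt_offset;
    vma->node.size  = size;
    ret = drm_mm_reserve_node(&ggtt->mm, &vma->node);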
CC: David Airlie <airlied@linux.ie>
CC: <dri-devel@lists.freedesktop.org>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For an upcoming patch where we introduce the i915 VMA, it's ideal to
have the drm_mm_node as part of the VMA struct (ie. it's pre-allocated).
Part of the conversion to VMAs is to kill off obj->gtt_space. Doing this
will break a bunch of code, but amongst them are 2 callers of
drm_mm_create_block(), both related to stolen memory.
It also allows us to embed the drm_mm_node into the object currently
which provides a nice transition over to the new code.
v2: Reordered to do before ripping out obj->gtt_offset.
Some minor cleanups made available because of reordering.
v3: s/continue/break on failed stolen node allocation (David)
Set obj->gtt_space on failed node allocation (David)
Only unref stolen (fix double free) on failed create_stolen (David)
Free node, and NULL it in failed create_stolen (David)
Add back accidentally removed newline (David)
CC: <dri-devel@lists.freedesktop.org>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Acked-by: David Airlie <airlied@linux.ie>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A magic -1 is obscure, especially since it's actually passed as an
unsigned, so it depends upon the magic sign extension rules in C. This has
been added in
commit 3727d55e4d
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Wed May 8 10:45:14 2013 -0700
drm/i915: allow stolen, pre-allocated objects to avoid GTT allocation v2
Use a proper #define instead. Spotted while reviewing Ben's
drm_mm_create_block changes.
v2: Cast the constant to u32 since otherwise we again have a type
mismatch. Suggested by Chris Wilson.
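i.e. something along these lines (sketch):
    /* Named constant, cast to u32 so there is no signed/unsigned mismatch,
     * instead of a bare -1 meaning "no GTT offset requested". */
    #define I915_GTT_OFFSET_NONE ((u32)-1)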
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Every other place properly checks whether we've managed to set
up the stolen allocator at boot-up, with the exception
of the cleanup code. Which results in an ugly
*ERROR* Memory manager not clean. Delaying takedown
at module unload time since the drm_mm isn't initialized at all.
v2: While at it check whether the stolen drm_mm is initialized instead
of the more obscure stolen_base == 0 check.
v3: Fix up the logic. Also we need to keep the stolen_base check in
i915_gem_object_create_stolen_for_preallocated since that can be
called before stolen memory is fully set up. Spotted by Chris Wilson.
v4: Readd the conversion in i915_gem_object_create_stolen_for_preallocated,
the check is for the dev_priv->mm.gtt_space drm_mm, the stolen
allocator must already be initialized when calling that function (if
we indeed have stolen memory).
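i.e. the cleanup path gains a guard along these lines (sketch):
    /* Sketch: bail early if stolen setup never happened, instead of
     * tearing down a drm_mm that was never initialized. */
    if (!drm_mm_initialized(&dev_priv->mm.stolen))
            return;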
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65953
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: lu hua <huax.lu@intel.com> (v3)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Resolve conflict with Damien's FBC_CHIP_DEFAULT no fbc
reason.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since it will be used for the global bound/unbound list with full PPGTT,
this helps clarify things for upcoming code rework.
Recommended-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In some cases, we may not need GTT address space allocated to a stolen
object, so allow passing -1 to the preallocated function to indicate as
much.
v2: remove BUG_ON(gtt_offset & 4095) now that -1 is allowed (Ville)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
But we need to get the right stolen base and make pre-allocated objects
for BIOS stuff so we don't clobber it. If the BIOS hasn't allocated a
power context, we allocate one here too, from stolen space as required
by the docs.
v2: fix stolen to phys if ladder (Ben)
keep BIOS reserved space out of allocator altogether (Ben)
v3: fix mask of stolen base (Ben)
v4: clean up preallocated object on unload (Ben)
don't zero reg on unload (Jesse)
fix mask harder (Jesse)
v5: use unref for freeing stolen bits (Chris)
move alloc/free to intel_pm.c (Chris)
v6: NULL pctx at disable time so error paths work (Ben)
v7: use correct PCI device for config read (Jesse)
Reviewed-by: Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Instead of repeatedly bombarding the user with a request to reboot and
increase the stolen size with every fb refresh, just inform them the
first time only.
v2: Rearrange code so the hint to increase the amount of memory stolen
by the BIOS is only emitted if we fail to find sufficient stolen memory
for FBC.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fixup formatting code mismatch that gcc spotted.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since for_each_sg_page already supports memory w/o backing pages, we can
revert the corresponding workaround.
This reverts commit 5bd4687e57.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>