Reported by Coverity. The intention is that the return value is
checked, but let's be more paranoid and make it extremely obvious
if something forgets to.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We treat other plane updates in the same fashion. Spotted because
Rodrigo kept reporting a bug in the PSR code where the frontbuffer was
eternally stuck with a dirty cursor bit set.
The psr testcase should have caught this, but that i-g-t is kaputt.
Rodrigo is signed up to fix that.
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by-and-Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we receive a storm of requests for the same context (see gem_storedw_loop_*)
we might end up iterating over too many elements at interrupt time, looking for
contexts to squash together. Instead, share the burden by giving more
intelligence to the queue function. At most, the interrupt will iterate over
three elements.
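A minimal sketch of the queue-time squashing, assuming the series' intel_ctx_submit_request element and an execlist_lock/execlist_queue pair on the engine (the structure is from this series, the details here are assumptions):

static void execlists_context_queue(struct intel_engine_cs *ring,
				    struct intel_context *to, u32 tail)
{
	struct intel_ctx_submit_request *req, *cursor, *tail_req;
	unsigned long flags;
	int num_elements = 0;

	req = kzalloc(sizeof(*req), GFP_KERNEL);
	if (req == NULL)
		return;	/* error handling elided in this sketch */
	req->ctx = to;
	req->ring = ring;
	req->tail = tail;

	spin_lock_irqsave(&ring->execlist_lock, flags);

	/* Count a bounded prefix of the queue: 0, 1, 2 or "more". */
	list_for_each_entry(cursor, &ring->execlist_queue, execlist_link)
		if (++num_elements > 2)
			break;

	if (num_elements > 2) {
		tail_req = list_last_entry(&ring->execlist_queue,
					   struct intel_ctx_submit_request,
					   execlist_link);
		if (tail_req->ctx == to) {
			/* Same context again: squash the older submission
			 * point so only the newest tail remains queued. */
			list_del(&tail_req->execlist_link);
			kfree(tail_req);	/* deferred in the real code */
		}
	}

	list_add_tail(&req->execlist_link, &ring->execlist_queue);
	if (num_elements == 0)
		execlists_context_unqueue(ring);	/* nothing in flight */

	spin_unlock_irqrestore(&ring->execlist_lock, flags);
}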
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In the current Execlists feeding mechanism, full preemption is not
supported yet: only lite-restores are allowed (that is: the GPU
simply samples a new tail pointer for the context currently in
execution).
But we have identified a scenario in which a full preemption occurs:
1) We submit two contexts for execution (A & B).
2) The GPU finishes with the first one (A), switches to the second one
(B) and informs us.
3) We submit B again (hoping to cause a lite restore) together with C,
but in the time we spend writing to the ELSP, the GPU finishes B.
4) The GPU starts executing B again (since we told it to).
5) We receive a B finished interrupt and, mistakenly, we submit C (again)
and D, causing a full preemption of B.
The race is avoided by keeping track of how many times a context has been
submitted to the hardware and by better discriminating the received context
switch interrupts: in the example, when we have submitted B twice, we won't
submit C and D as soon as we receive the notification that B is completed
because we were expecting to get a LITE_RESTORE and we didn't, so we know a
second completion will be received shortly.
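A hedged sketch of the interrupt-side accounting; elsp_submitted is the per-request submission counter this patch introduces, the remaining names are assumptions:

static bool execlists_check_remove_request(struct intel_engine_cs *ring,
					   u32 request_id)
{
	struct intel_ctx_submit_request *head_req;

	head_req = list_first_entry_or_null(&ring->execlist_queue,
					    struct intel_ctx_submit_request,
					    execlist_link);
	if (head_req == NULL ||
	    intel_execlists_ctx_id(head_req->ctx_obj) != request_id)
		return false;

	WARN(head_req->elsp_submitted == 0, "Never submitted head request\n");

	/* Submitted twice (B in the example above): this first completion
	 * event stands in for the lite restore we expected, and a second
	 * one will follow shortly, so don't touch C and D yet. */
	if (--head_req->elsp_submitted > 0)
		return false;

	list_del(&head_req->execlist_link);
	kfree(head_req);	/* deferred in the real code */
	return true;
}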
Without this explicit checking, somehow, the batch buffer execution order
gets messed with. This can be verified with the IGT test I sent together with
the series. I don't know the exact mechanism by which the pre-emption messes
with the execution order but, since other people are working on the Scheduler
+ Preemption on Execlists, I didn't try to fix it. In this series, only Lite
Restores are supported (other kinds of preemption WARN).
v2: elsp_submitted belongs in the new intel_ctx_submit_request. Several
rebase changes.
v3: Clarify how the race is avoided, as requested by Daniel.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Align function parameters ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Handle all context status events in the context status buffer on every
context switch interrupt. We only remove work from the execlist queue
after a context status buffer reports that it has completed and we only
attempt to schedule new contexts on interrupt when a previously submitted
context completes (unless no contexts are queued, which means the GPU is
free).
We cannot call intel_runtime_pm_get() in an interrupt (or with a spinlock
grabbed, FWIW), because it might sleep, which is not a nice thing to do.
Instead, do the runtime_pm get/put together with the create/destroy request,
and handle the forcewake get/put directly.
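A rough sketch of the per-interrupt drain, assuming GEN8's six-entry context status buffer; the register names follow the series, exact offsets are assumptions:

u32 status_pointer, status, status_id;
u8 read_pointer, write_pointer;

status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
read_pointer = ring->next_context_status_buffer;
write_pointer = status_pointer & 0x07;
if (read_pointer > write_pointer)
	write_pointer += 6;	/* the six-entry buffer wrapped */

while (read_pointer < write_pointer) {
	read_pointer++;
	status = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
			   (read_pointer % 6) * 8);
	status_id = I915_READ(RING_CONTEXT_STATUS_BUF(ring) +
			      (read_pointer % 6) * 8 + 4);
	/* On a "context complete" event, pop the finished request from
	 * the execlist queue and submit the next context(s), if any. */
}

/* Ack what we consumed by updating the read pointer, just in case. */
ring->next_context_status_buffer = write_pointer % 6;
I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
	   ((u32)ring->next_context_status_buffer & 0x07) << 8);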
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v2: Unreferencing the context when we are freeing the request might free
the backing bo, which requires the struct_mutex to be grabbed, so defer
unreferencing and freeing to a bottom half.
v3:
- Ack the interrupt immediately, before trying to handle it (fix for
missing interrupts by Bob Beckett <robert.beckett@intel.com>).
- Update the Context Status Buffer Read Pointer, just in case (spotted
by Damien Lespiau).
v4: New namespace and multiple rebase changes.
v5: Squash with "drm/i915/bdw: Do not call intel_runtime_pm_get() in an
interrupt", as suggested by Daniel.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch ...]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Context switch (and execlist submission) should happen only when
other contexts are not active, otherwise pre-emption occurs.
To ensure this, we place context switch requests in a queue and those
requests are later consumed when the right context switch interrupt is
received (still TODO).
v2: Use a spinlock, do not remove the requests on unqueue (wait for
context switch completion).
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
v3: Several rebases and code changes. Use unique ID.
v4:
- Move the queue/lock init to the late ring initialization.
- Damien's kmalloc review comments: check return, use sizeof(*req),
do not cast.
v5:
- Do not reuse drm_i915_gem_request. Instead, create our own.
- New namespace.
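As a sketch, the dedicated request struct could be as small as this (field names are assumptions):

struct intel_ctx_submit_request {
	struct intel_context *ctx;	/* which context to switch to */
	struct intel_engine_cs *ring;	/* the engine it was queued on */
	u32 tail;			/* ringbuffer tail to sample */

	struct list_head execlist_link;	/* protected by execlist_lock */
};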
Signed-off-by: Michel Thierry <michel.thierry@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2-v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch + wash-up s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Each logical ring context has the tail pointer in the context object,
so update it before submission.
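A minimal sketch, assuming the tail lives in the context's register state page at a CTX_RING_TAIL index (the regmap index name comes from this series; the +1 value-slot convention is an assumption here):

static void execlists_ctx_write_tail(struct drm_i915_gem_object *ctx_obj,
				     u32 tail)
{
	struct page *page;
	uint32_t *reg_state;

	/* The register state lives in the second page of the context
	 * object (LRCA + PAGE_SIZE). */
	page = i915_gem_object_get_page(ctx_obj, 1);
	reg_state = kmap_atomic(page);

	reg_state[CTX_RING_TAIL + 1] = tail;

	kunmap_atomic(reg_state);
}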
v2: New namespace.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A context switch occurs by submitting a context descriptor to the
ExecList Submission Port. Given that we can now initialize a context,
it's possible to begin implementing the context switch by creating the
descriptor and submitting it to ELSP (actually two, since the ELSP
has two ports).
The context object must be mapped in the GGTT, which means it must exist
in the 0-4GB graphics VA range.
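A hedged sketch of the submission itself; the RING_ELSP/RING_EXECLIST_STATUS names follow the series and the forcewake helper is an assumption:

static void execlists_elsp_write(struct intel_engine_cs *ring,
				 u64 desc0, u64 desc1)
{
	struct drm_i915_private *dev_priv = ring->dev->dev_private;

	/* Keep the GT out of C6 for the duration of the writes (see the
	 * v4 note below); the exact helper is an assumption. */
	gen6_gt_force_wake_get(dev_priv, FORCEWAKE_ALL);

	/* Descriptor 1 (second port, may be zero) goes first, descriptor 0
	 * last; upper dword before lower. The final write submits. */
	I915_WRITE(RING_ELSP(ring), upper_32_bits(desc1));
	I915_WRITE(RING_ELSP(ring), lower_32_bits(desc1));
	I915_WRITE(RING_ELSP(ring), upper_32_bits(desc0));
	I915_WRITE(RING_ELSP(ring), lower_32_bits(desc0));

	/* The ELSP posting read (see v3 below). */
	POSTING_READ(RING_EXECLIST_STATUS(ring));

	gen6_gt_force_wake_put(dev_priv, FORCEWAKE_ALL);
}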
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
v2: This code has changed quite a lot in various rebases. Of particular
importance is that now we use the globally unique Submission ID to send
to the hardware. Also, context pages are now pinned unconditionally to
GGTT, so there is no need to bind them.
v3: Use LRCA[31:12] as hwCtxId[19:0]. This guarantees that the HW context
ID we submit to the ELSP is globally unique and != 0 (Bspec requirements
of the software use-only bits of the Context ID in the Context Descriptor
Format) without the hassle of the previous submission Id construction.
Also, re-add the ELSP posting read (it was dropped somewhere during the
rebases).
v4:
- Squash with "drm/i915/bdw: Add forcewake lock around ELSP writes" (BSPEC
says: "SW must set Force Wakeup bit to prevent GT from entering C6 while
ELSP writes are in progress") as noted by Thomas Daniel
(thomas.daniel@intel.com).
- Rename functions and use an execlists/intel_execlists_ namespace.
- The BUG_ON only checked that the LRCA was <32 bits, but it didn't make
sure that it was properly aligned. Spotted by Alistair Mcaulay
<alistair.mcaulay@intel.com>.
v5:
- Improved source code comments as suggested by Chris Wilson.
- No need to abstract submit_ctx away, as pointed out by Brad Volkin.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch. Sigh.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On a previous iteration of this patch, I created an Execlists
version of __i915_add_request and abstracted it away as a
vfunc. Daniel Vetter wondered then why that was needed:
"with the clean split in command submission I expect every
function to know whether it'll submit to an lrc (everything in
intel_lrc.c) or whether it'll submit to a legacy ring (existing
code), so I don't see a need for an add_request vfunc."
The honest, hairy truth is that this patch is the glue keeping
the whole logical ring puzzle together:
- i915_add_request is used by intel_ring_idle, which in turn is
used by i915_gpu_idle, which in turn is used in several places
inside the eviction and gtt codes.
- Also, it is used by i915_gem_check_olr, which is littered all
over i915_gem.c
- ...
If I were to duplicate all the code that directly or indirectly
uses __i915_add_request, I'd end up creating a separate driver.
To show the differences between the existing legacy version and
the new Execlists one, this time I have special-cased
__i915_add_request instead of adding an add_request vfunc. I
hope this helps to untangle this Gordian knot.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Adjust to ringbuf->FIXME_lrc_ctx per the discussion with
Thomas Daniel.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Static analysers find it 'suspicious', that we're trying to allocate memory for
elements of size sizeof(struct drm_fb_helper_connector) when the array is
defined as struct drm_fb_helper_connector **.
Use sizeof(struct drm_fb_helper_connector *) instead.
Note that, with the structure currently defined as:
struct drm_fb_helper_connector {
struct drm_connector *connector;
};
this was still doing the right thing, but it may not in the future if
additional fields are added.
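For illustration, the fixed allocation looks like this (count and the variable name are hypothetical):

struct drm_fb_helper_connector **connectors;

/* Array of pointers: size each element as a pointer, not as the struct. */
connectors = kcalloc(count, sizeof(struct drm_fb_helper_connector *),
		     GFP_KERNEL);

/* Equivalent, and immune to future changes of the element type: */
connectors = kcalloc(count, sizeof(*connectors), GFP_KERNEL);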
Cc: Todd Previte <tprevite@gmail.com>
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Static analysis will be unhappy if a function can theoretically return
0 and we're trying to divide by that value.
Mark that case that cannot occur as a BUG() instead.
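In pattern form (the names here are hypothetical), the change amounts to:

divisor = compute_divisor();	/* the analyser thinks this may return 0 */
BUG_ON(divisor == 0);		/* document that the case cannot occur */
result = total / divisor;	/* no more division-by-zero warning */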
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bunch of small leftovers spotted by looking at the make htmldocs output.
I've left out dp mst, there's too much amiss there.
v2: Also add the missing parameter docbook in the dp mst code - Dave
Airlie correctly pointed out that we don't actually want kerneldoc for
the missing structure members in header files.
Cc: Dave Airlie <airlied@gmail.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The execlist patches have a bit of a convoluted and long history, and due
to that the actual submission is still misplaced, deeply buried in the
low-level ringbuffer handling code. This design goes back to the
legacy ringbuffer code with its tricky lazy request and simple work
submission using ring tail writes. For that reason they need a
ring->ctx backpointer.
The goal is to unbury that code and move it up into a level where the
full execlist context is available so that we can ditch this
backpointer. Until that's done make it really obvious that there's
work still to be done.
Cc: Oscar Mateo <oscar.mateo@intel.com>
Cc: Thomas Daniel <thomas.daniel@intel.com>
Acked-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current error state harks back to the era of just a single VM. For
full-ppgtt, we capture every bo on every VM. It behoves us to then print
every bo for every VM, which we currently fail to do and so miss vital
information in the error state.
v2: Use the vma address rather than -1!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On VLV, post S0i3, the following issue is observed in i915_drm_thaw during
ring initialization.
[ 335.604039] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607340] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 336.607345] [drm:init_ring_common] ERROR failed to set render ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 337.610645] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613952] [drm:stop_ring] ERROR bsd ring :timed out trying to stop ring
[ 338.613956] [drm:init_ring_common] ERROR failed to set bsd ring head to zero ctl 00000000 head 00000000 tail 00000000 start 00000000
[ 339.617256] [drm:stop_ring] ERROR render ring :timed out trying to stop ring
[ 339.617258] -----------[ cut here ]-----------
[ 339.617267] WARNING: CPU: 0 PID: 6 at drivers/gpu/drm/i915/intel_ringbuffer.c:1666 intel_cleanup_ring+0xe6/0xf0()
[ 339.617396] --[ end trace 5ef5ed1a3c92e2a6 ]--
[ 339.617428] [drm:__i915_drm_thaw] ERROR failed to re-initialize GPU, declaring wedged!
This is happening because wake is not enabled and the Gunit registers are not
restored. To fix this, the system suspend/resume paths need to perform the
save/restore and the additional platform-specific setup in suspend_complete
and resume_prepare.
suspend_complete is shared unconditionally for VLV, HSW and BDW. resume_prepare
for HSW and BDW includes the pc8 disabling, which is needed during thaw_early,
so it is shared unconditionally as well. For VLV and SNB, a runtime-resume
specific sequence already exists.
Cc: Imre Deak <imre.deak@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Goel, Akash <akash.goel@intel.com>
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With this change, intel_runtime_suspend and intel_runtime_resume functions
become completely platform agnostic. Platform specific suspend/resume
changes are moved to intel_suspend_complete and intel_resume_prepare.
Cc: Imre Deak <imre.deak@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Goel, Akash <akash.goel@intel.com>
Signed-off-by: Sagar Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rather than take and release the console_lock() around a non-existent
DRM_I915_FBDEV, move the lock acquisition into the callee where it will
be compiled out by the config option entirely. This includes moving the
deferred fb_set_suspend() dance and encapsulating it entirely within
intel_fbdev.c.
v2: Use an integral work item so that we can explicitly flush the work
upon suspend/unload.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Add the flush_work in fbdev_fini per the mailing list
discussion. And s/BUG_ON/WARN_ON/ because.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ville pointed out the GCCism __builtin_types_compatible_p() that we
could use to replace our heavily casted presumption __I915__ macro that
was based on comparing struct sizes.
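The macro then takes roughly the following shape (a sketch; to_i915() is assumed to be the existing drm_device-to-private helper):

#define __I915__(p) ({ \
	struct drm_i915_private *__p; \
	if (__builtin_types_compatible_p(typeof(*(p)), struct drm_i915_private)) \
		__p = (struct drm_i915_private *)(p); \
	else if (__builtin_types_compatible_p(typeof(*(p)), struct drm_device)) \
		__p = to_i915((struct drm_device *)(p)); \
	else \
		BUILD_BUG(); \
	__p; \
})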
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
845/865 support different cursor sizes as well, albeit a bit differently
than later platforms. Add the necessary code to make them work.
Untested due to lack of hardware.
v2: Warn but accept invalid stride (Chris)
Rewrite the cursor size checks for other platforms (Chris)
v3: More polish and magic to the cursor size checks (Chris)
v4: Moar polish and a comment (Chris)
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ever since
commit 5efb3e2838
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Wed Apr 9 13:28:53 2014 +0300
drm/i915/chv: Add cursor pipe offsets
the only difference between i9xx_update_cursor() and ivb_update_cursor()
was the hsw+ pipe csc handling. Let's unify them so we can rid
ourselves of some duplicated code.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CURSIZE register exists on 845/865 only, so move it to
i845_update_cursor(). Changes to cursor size must be done only when the
cursor is disabled, so do the write just before enabling the cursor.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make sure the cursor gets fully clipped when enabling it on a disabled
crtc via setplane. This will prevent the lower level code from
attempting to enable the cursor in hardware.
Cc: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Also remove related WARN_ONs, which seem to have been getting hit for a rather
long time. But apparently no one noticed, since our module reload is
already WARNING-infested :(
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We want to move the aliasing ppgtt cleanup back into the global
gtt cleanup code for symmetry, but first we need to create such
a place.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Address space cleanup isn't really a job for the low-level cleanup
callbacks. Without this change we can't reuse the low-level cleanup
callback for the aliasing ppgtt cleanup.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all the flow is streamlined the rule is simple: We create
a new ppgtt for a new context when we have full ppgtt enabled.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's a bit of confusion since we track the global gtt,
the aliasing and real ppgtt in the ctx->vm pointer. And not
all callers really bother to check for the different cases and just
presume that it points to a real ppgtt.
Now looking closely we don't actually need ->vm to always point at an
address space - the only place that cares actually has fixup code
already to decide whether to look at the per-process or the global
address space.
So switch to just tracking the ppgtt directly and ditch all the
extraneous code.
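The fixup code mentioned above then reduces to an explicit choice, roughly (field names follow the commit):

struct i915_address_space *vm;

if (ctx->ppgtt)
	vm = &ctx->ppgtt->base;		/* real per-process address space */
else
	vm = &dev_priv->gtt.base;	/* aliasing ppgtt or none: global GTT */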
v2: Fixup the ppgtt debugfs file to not oops on a NULL ctx->ppgtt.
Also drop the early exit - without aliasing ppgtt we want to dump all
the ppgtts of the contexts if we have full ppgtt.
v3: Actually git add the compile fix.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Cc: "Thierry, Michel" <michel.thierry@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
OTC-Jira: VIZ-3724
[danvet: Resolve conflicts with execlist patches while applying.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Stuffing this into the context setup code doesn't make a lot of sense.
Also reusing the real ppgtt setup code makes even less sense since the
aliasing ppgtt isn't a real address space. Leaving all that stuff
uninitialized will make sure that we catch any abusers promptly.
This is also a prep work to clean up the context->ppgtt link.
v2: Fix up the logic fail; I'd fumbled it so badly as to completely
disable ppgtt on gen6. Spotted by Ville and Michel. Also move around
the pde write into the gen6 init function, since otherwise it won't
work at all.
v3: Only initialize the aliasing ppgtt when we actually enable it.
Cc: "Thierry, Michel" <michel.thierry@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Squash in fixup from Fengguang Wu.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently we abuse the aliasing ppgtt to set up the ppgtt support in
general. Which is a bit backwards since with full ppgtt we don't ever
need the aliasing ppgtt.
So untangle this and separate the ppgtt init from the aliasing
ppgtt. While at it drag it out of the context enabling (which just
does a switch to the default context).
Note that we still have the differentiation between synchronous and
asynchronous ppgtt setup, but that will soon vanish. So also correctly
wire up the return value handling to be prepared for when ->switch_mm
drops the synchronous parameter and could start to fail.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A subsequent patch will no longer initialize the aliasing ppgtt if we
have full ppgtt enabled, since we simply don't need that any more.
Unfortunately a few places check for the aliasing ppgtt instead of
checking for ppgtt in general. Fix them up.
One special case are the gtt offset and size macros, which have some
code to remap the aliasing ppgtt to the global gtt. The aliasing ppgtt
is _not_ a logical address space, so passing that in as the vm is
plain and simple a bug. So just WARN about it and carry on - we have a
graceful fall-through anyway if we can't find the vma.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We already need this just as a safety check in case the preallocation
reservation dance fails. But we definitely need it to be able to
move the aliasing ppgtt setup back out of the context code to this
place, where it belongs.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Stuff in headers really ought to have this.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This essentially unbreaks non-ppgtt operation where we'd scribble over
random memory.
While at it give the vm_to_ppgtt function a proper prefix and make it
a bit more paranoid.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Hardware contexts reference a ppgtt, not the other way round. And the
only user of this (in debugfs) actually only cares about which file
the ppgtt is associated with. So give it what it wants.
While at it give the ppgtt create function a proper name&place.
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We should prefer `struct pci_device_id` over `DEFINE_PCI_DEVICE_TABLE` to
meet kernel coding style guidelines. This issue was reported by checkpatch.
A simplified version of the semantic patch that makes this change is as
follows (http://coccinelle.lip6.fr/):
// <smpl>
@@
identifier i;
declarer name DEFINE_PCI_DEVICE_TABLE;
initializer z;
@@
- DEFINE_PCI_DEVICE_TABLE(i)
+ const struct pci_device_id i[]
= z;
// </smpl>
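For a hypothetical driver table, the conversion reads:

/* before */
static DEFINE_PCI_DEVICE_TABLE(foo_pci_ids) = {
	{ PCI_DEVICE(0x8086, 0x1234) },
	{ }
};

/* after */
static const struct pci_device_id foo_pci_ids[] = {
	{ PCI_DEVICE(0x8086, 0x1234) },
	{ }
};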
[bhelgaas: add semantic patch]
Signed-off-by: Benoit Taine <benoit.taine@lip6.fr>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
So when reviewing Michel's patch I've noticed a few things and cleaned
them up:
- The early checks in ppgtt_release are now redundant: The inactive
list should always be empty now, so we can ditch these checks. Even
for the aliasing ppgtt (though that's a different confusion) since
we tear that down after all the objects are gone.
- The ppgtt handling functions are splattered all over. Consolidate
them in i915_gem_gtt.c, give them OCD prefixes and add wrappers for
get/put.
- There was a bit a confusion in ppgtt_release about whether it cares
about the active or inactive list. It should care about them both,
so augment the WARNINGs to check for both.
There's still create_vm_for_ctx left to do, but that is blocked on the
removal of ppgtt->ctx. Once that's done we can rename it to
i915_ppgtt_create and move it to its siblings for handling ppgtts.
v2: Move the ppgtt checks into the inline get/put functions as
suggested by Chris.
v3: Inline the now redundant ppgtt local variable.
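The get/put wrappers with the checks folded in (per v2) might look like this sketch, with i915_ppgtt_release as the kref release callback (names follow the commit's prefix scheme):

static inline void i915_ppgtt_get(struct i915_hw_ppgtt *ppgtt)
{
	if (ppgtt)
		kref_get(&ppgtt->ref);
}

static inline void i915_ppgtt_put(struct i915_hw_ppgtt *ppgtt)
{
	if (ppgtt)
		kref_put(&ppgtt->ref, i915_ppgtt_release);
}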
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michel Thierry <michel.thierry@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
VMAs should take a reference of the address space they use.
Now, when the fd is closed, it will release the ref that the context was
holding, but it will still be referenced by any vmas that are still
active.
ppgtt_release() should then only be called when the last thing referencing
it releases the ref, and it can just call the base cleanup and free the
ppgtt.
Note that with this we will extend the lifetime of ppgtts which
contain shared objects. But all the non-shared objects will get
removed as soon as they drop off the active list and for the shared
ones the shrinker can eventually reap them. Since we currently can't
evict ppgtt pagetables either I don't think that temporary leak is
important.
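A sketch of the resulting release path, assuming a kref embedded in the ppgtt:

static void i915_ppgtt_release(struct kref *kref)
{
	struct i915_hw_ppgtt *ppgtt =
		container_of(kref, struct i915_hw_ppgtt, ref);

	/* The last reference is gone: neither the context (fd closed) nor
	 * any still-active VMA points here, so base cleanup plus free is
	 * all that's left to do. */
	ppgtt->base.cleanup(&ppgtt->base);
	kfree(ppgtt);
}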
Signed-off-by: Michel Thierry <michel.thierry@intel.com>
[danvet: Add note about potential ppgtt leak with this approach.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The normal flip function places things in the ring in the legacy
way, so we either fix that or force MMIO flips always as we do in
this patch.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch. Fucking again.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is what i915_gem_do_execbuffer calls when it wants to execute some
workload in an Execlists world.
v2: Check arguments before doing stuff in intel_execlists_submission. Also,
get rel_constants parsing right.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Drop the chipset flush, that's pre-gen6. And appease
checkpatch a bit .... again!]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need to attend to context switch interrupts from all rings. Also, fix the
IMR/IER writes and add HWSTAM setup at ring init time.
Notice that, if added to irq_enable_mask, the context switch interrupts would
be incorrectly masked out whenever the user interrupts are (due to no users
waiting on a sequence number). Therefore, this commit adds a bitmask of interrupts to
be kept unmasked at all times.
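In sketch form, with an irq_keep_mask next to the existing irq_enable_mask (the field name follows this commit's description):

/* get_irq: unmask user interrupts, but never mask the keep set */
I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));

/* put_irq: mask user interrupts again, still sparing the keep set */
I915_WRITE_IMR(ring, ~ring->irq_keep_mask);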
v2: Disable HWSTAM, as suggested by Damien (nobody listens to these interrupts,
anyway).
v3: Add new get/put_irq functions.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2 & v3)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Drop the GEN8_ prefix from the context switch interrupt
define and move it to its brethren.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is a hard one, since there is no direct hardware ring to
control when in Execlists.
We reuse intel_ring_idle here, but it should be fine as long
as i915_add_request does the right thing.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Same as the legacy-style ring->flush.
v2: The BSD invalidate bit still exists in GEN8! Add it for the VCS
rings (but still consolidate the blt and bsd ring flushes into one).
This was noticed by Brad Volkin.
v3: The command for BSD and for other rings is slightly different:
get it exactly the same as in gen6_ring_flush + gen6_bsd_ring_flush
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Very similar to the legacy add_request, only modified to account for
logical ringbuffer.
v2: Use MI_GLOBAL_GTT, as suggested by Brad Volkin.
v3: Unify render and non-render in the same function, as noticed by
Brad Volkin.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Well, new-ish: if all this code looks familiar, that's because it's
a clone of the existing submission mechanism (with some modifications
here and there to adapt it to LRCs and Execlists).
And why did we do this instead of reusing code, one might wonder?
Well, there are some fears that the differences are big enough that
they will end up breaking all platforms.
Also, Execlists offer several advantages, like control over when the
GPU is done with a given workload, that can help simplify the
submission mechanism, no doubt. I am interested in getting Execlists
to work first and foremost, but in the future this parallel submission
mechanism will help us to fine tune the mechanism without affecting
old gens.
v2: Pass the ringbuffer only (whenever possible).
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Appease checkpatch. Again. And drop the legacy sarea gunk
that somehow crept in.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Don't print raw numbers, use port_name() and tell the user whether it's
long or short without having to figure out what the other magic number
means.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
No mystery here: the seqno is still retrieved from the engine's
HW status page (the one in the default context; for the moment,
I see no reason to worry about other contexts' HWS pages).
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It needs to be anonymous memory (no file mappings)
and we are required to install an MMU notifier.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Whenever a userspace mapping related to our userptr changes,
we wait for it to become idle and unmap it from the GTT.
v2: rebased, fix mutex unlock in error path
v3: improve commit message
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This way we test userptr availability at BO creation time instead of first use.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Avoid problems with writeback by limiting userptr to anonymous memory.
v2: add commit and code comments
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This patch adds an IOCTL for turning a pointer supplied by
userspace into a buffer object.
It imposes several restrictions upon the memory being mapped:
1. It must be page aligned (both start/end addresses, i.e. ptr and size).
2. It must be normal system memory, not a pointer into another map of IO
space (e.g. it must not be a GTT mmapping of another object).
3. The BO is mapped into GTT, so the maximum amount of memory mapped at
all times is still the GTT limit.
4. The BO is only mapped readonly for now, so no write support.
5. List of backing pages is only acquired once, so they represent a
snapshot of the first use.
Exporting and sharing as well as mapping of buffer objects created by
this function is forbidden and results in an -EPERM.
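From userspace, usage might look like the following sketch (struct and flag names are taken from the radeon UAPI as I understand it; treat the details as assumptions):

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/radeon_drm.h>

/* Turn a page-aligned anonymous allocation into a read-only BO. */
static int radeon_userptr_bo(int fd, void *ptr, uint64_t size,
			     uint32_t *handle)
{
	struct drm_radeon_gem_userptr args;

	memset(&args, 0, sizeof(args));
	args.addr = (uintptr_t)ptr;	/* page aligned, restriction 1 */
	args.size = size;		/* page aligned, restriction 1 */
	args.flags = RADEON_GEM_USERPTR_READONLY |	/* restriction 4 */
		     RADEON_GEM_USERPTR_ANONONLY;	/* restriction 2 */

	if (ioctl(fd, DRM_IOCTL_RADEON_GEM_USERPTR, &args) != 0)
		return -errno;

	*handle = args.handle;
	return 0;
}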
v2: squash all previous changes into first public version
v3: fix tabs, map readonly, don't use MM callback any more
v4: set TTM_PAGE_FLAG_SG so that TTM never messes with the pages,
pin/unpin pages on bind/unbind instead of populate/unpopulate
v5: rebased on 3.17-wip, IOCTL renamed to userptr, reject any unknown
flags, better handle READONLY flag, improve permission check
v6: fix ptr cast warning, use set_page_dirty/mark_page_accessed on unpin
v7: add a warning about its availability in the API definition
v8: drop access_ok check, fix VM mapping bits
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com> (v4)
Reviewed-by: Jérôme Glisse <jglisse@redhat.com> (v4)
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Logical rings do not need most of the initialization their
legacy ringbuffer counterparts do: we just need the pipe
control object for the render ring, to enable Execlists on the
hardware, and a few workarounds.
v2: Squash with: "drm/i915: Extract pipe control fini & make
init outside accesible".
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: Make checkpatch happy.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Allocate and populate the default LRC for every ring, call
gen-specific init/cleanup, init/fini the command parser and
set the status page (now inside the LRC object). These are
things all engines/rings have in common.
Stopping the ring before cleanup and initializing the seqnos
is left as a TODO task (we need more infrastructure in place
before we can achieve this).
v2: Check the ringbuffer backing obj for ring_is_initialized,
instead of the context backing obj (similar, but not exactly
the same).
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Execlists are indeed a brave new world with respect to workload
submission to the GPU.
In previous versions of this series, I have tried to impact the
legacy ringbuffer submission path as little as possible (mostly,
passing the context around and using the correct ringbuffer when I
needed one) but Daniel is afraid (probably with reason) that
these changes and, especially, future ones, will end up breaking
older gens.
This commit and some others coming next will try to limit the
damage by creating an alternative path for workload submission.
The first step is here: laying out a new ring init/fini.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As suggested by Daniel Vetter. The idea, in subsequent patches, is to
provide an alternative to these vfuncs for the Execlists submission
mechanism.
v2: Split into two and reordered to illustrate our intentions, instead
of showing it off. Also, removed the add_request vfunc and added the
stop_ring one.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet:
- Make checkpatch happy.
- Be grumpy about the excessive vtable.
- Ditch gt->is_ring_initialized.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The backing objects and ringbuffers for contexts created via open
fd are actually empty until the user starts sending execbuffers to
them. At that point, we allocate & populate them. We do this because,
at create time, we really don't know which engine is going to be used
with the context later on (and we don't want to waste memory on
objects that we might never use).
v2: As contexts created via ioctl can only be used with the render
ring, we have enough information to allocate & populate them right
away.
v3: Defer the creation always, even with ioctl-created contexts, as
requested by Daniel Vetter.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For the most part, logical ring context objects are similar to hardware
contexts in that the backing object is meant to be opaque. There are
some exceptions where we need to poke certain offsets of the object for
initialization, updating the tail pointer or updating the PDPs.
For our basic execlist implementation we'll only need our PPGTT PDs,
and ringbuffer addresses in order to set up the context. With previous
patches, we have both, so start prepping the context to be loaded.
Before running a context for the first time you must populate some
fields in the context object. These fields begin at LRCA + 1 PAGE, i.e.
page 1 (in 0-based counting) of the context image. These same
fields will be read and written to as contexts are saved and restored
once the system is up and running.
Many of these fields are completely reused from previous global
registers: ringbuffer head/tail/control, context control matches some
previous MI_SET_CONTEXT flags, and page directories. There are other
fields which we don't touch which we may want in the future.
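A fragment of what the population might look like (a sketch; the CTX_* regmap indices and the +1 value-slot convention are from this series, the surrounding variables are assumed to be set up already):

struct page *page;
uint32_t *reg_state;

/* Second page of the context object (LRCA + PAGE_SIZE, 0-based page 1). */
page = i915_gem_object_get_page(ctx_obj, 1);
reg_state = kmap_atomic(page);

/* Render uses MI_LOAD_REGISTER_IMM(14), other engines (11); the first
 * LRI header also sets the Force Posted bit (see v4 below). */
reg_state[CTX_LRI_HEADER_0] = MI_LOAD_REGISTER_IMM(14) | MI_LRI_FORCE_POSTED;

/* Ringbuffer head/tail/start/control, reusing the old global registers. */
reg_state[CTX_RING_TAIL + 1] = 0;
reg_state[CTX_RING_BUFFER_START + 1] = i915_gem_obj_ggtt_offset(ring_obj);
reg_state[CTX_RING_BUFFER_CONTROL + 1] =
	((ring_size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID;

/* Page directories for the PPGTT, one upper/lower dword pair per PDP. */
reg_state[CTX_PDP0_UDW + 1] = upper_32_bits(pd_addr);
reg_state[CTX_PDP0_LDW + 1] = lower_32_bits(pd_addr);

kunmap_atomic(reg_state);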
v2: CTX_LRI_HEADER_0 is MI_LOAD_REGISTER_IMM(14) for render and (11)
for other engines.
v3: Several rebases and general changes to the code.
v4: Squash with "Extract LR context object populating"
Also, Damien's review comments:
- Set the Force Posted bit on the LRI header, as the BSpec suggests we do.
- Prevent a warning when compiling a 32-bit kernel without HIGHMEM64.
- Add a clarifying comment to the context population code.
v5: Damien's review comments:
- The third MI_LOAD_REGISTER_IMM in the context does not set Force Posted.
- Remove dead code.
v6: Add a note about the (presumed) differences between BDW and CHV state
contexts. Also, Brad's review comments:
- Use the _MASKED_BIT_ENABLE, upper_32_bits and lower_32_bits macros.
- Be less magical about how we set the ring size in the context.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Rafael Barbalho <rafael.barbalho@intel.com> (v2)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Any given ringbuffer is unequivocally tied to one context and one engine.
By setting the appropriate pointers to them, the ringbuffer struct holds
all the information you might need to submit a workload for processing,
Execlists style.
v2: Drop ring->ctx since that looks terribly ill-defined for legacy
ringbuffer submission.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v1)
Acked-by: Damien Lespiau <damien.lespiau@intel.com> (v2)
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As we have said a couple of times by now, logical ring contexts have
their own ringbuffers: not only the backing pages, but the whole
management struct.
In a previous version of the series, this was achieved with two separate
patches:
drm/i915/bdw: Allocate ringbuffer backing objects for default global LRC
drm/i915/bdw: Allocate ringbuffer for user-created LRCs
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that we have the ability to allocate our own context backing objects
and we have multiplexed one of them per engine inside the context structs,
we can finally allocate and free them correctly.
Regarding the context size, reading the register to calculate the sizes
can work, I think, however the docs are very clear about the actual
context sizes on GEN8, so just hardcode that and use it.
v2: Rebased on top of the Full PPGTT series. It is important to notice
that at this point we have one global default context per engine, all
of them using the aliasing PPGTT (as opposed to the single global
default context we have with legacy HW contexts).
v3:
- Go back to one single global default context, this time with multiple
backing objects inside.
- Use different context sizes for non-render engines, as suggested by
Damien (still hardcoded, since the information about the context size
registers in the BSpec is, well, *lacking*).
- Render ctx size is 20 (or 19) pages, but not 21 (caught by Damien).
- Move default context backing object creation to intel_init_ring (so
that we don't waste memory in rings that might not get initialized).
v4:
- Reuse the HW legacy context init/fini.
- Create a separate free function.
- Rename the functions with an intel_ prefix.
v5: Several rebases to account for the changes in the previous patches.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A context backing object only makes sense for a given engine (because
it holds state data specific to that engine).
In legacy ringbuffer submission mode, the only MI_SET_CONTEXT we really
perform is for the render engine, so one backing object is all we need.
With Execlists, however, we need backing objects for every engine, as
contexts become the only way to submit workloads to the GPU. To tackle
this problem, we multiplex the context struct to contain <no-of-engines>
objects.
Originally, I colored this code by instantiating one new context for
every engine I wanted to use, but this change suggested by Brad Volkin
makes it more elegant.
v2: Leave the old backing object pointer behind. Daniel Vetter suggested
using a union, but it makes more sense to keep rcs_state as a NULL
pointer behind, to make sure no one uses it incorrectly when Execlists
are enabled, similar to what he suggested for ring->buffer (Rusty's API
level 5).
v3: Use the name "state" instead of the too-generic "obj", so that it
mirrors the name choice for the legacy rcs_state.
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For the moment this is just a placeholder, but it shows one of the
main differences between the good ol' HW contexts and the shiny
new Logical Ring Contexts: LR contexts allocate and free their
own backing objects. Another difference is that the allocation is
deferred (as the create function name suggests), but that does not
happen in this patch yet, because for the moment we are only dealing
with the default context.
Early in the series we had our own gen8_gem_context_init/fini
functions, but the truth is they now look almost the same as the
legacy hw context init/fini functions. We can always split them
later if this ceases to be the case.
Also, we do not fall back to legacy ringbuffers when logical ring
context initialization fails (not very likely to happen and, even
if it does, hw contexts would probably fail as well).
v2: Daniel says "explain, do not showcase".
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[danvet: s/BUG_ON/WARN_ON/.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Depending upon one module option to be sanitized (through USES_PPGTT)
for the other is a bit too fragile for my taste. At least WARN about
this.
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
These expanded contexts enable a number of new abilities, especially
"Execlists".
The macro is defined to off until we have enough in place for this
to have a hope of working.
v2: Rename "advanced contexts" to the more correct "logical ring
contexts".
v3: Add a module parameter to enable execlists. Execlist are relatively
new, and so it'd be wise to be able to switch back to ring submission
to debug subtle problems that will inevitably arise.
v4: Add an intel_enable_execlists function.
v5: Sanitize early, as suggested by Daniel. Remove lrc_enabled.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net> (v1)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com> (v3)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com> (v2, v4 & v5)
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Some legacy HW context code assumptions don't make sense for this new
submission method, so we will place this stuff in a separate file.
Note for reviewers: I've carefully considered the best name for this file
and this was my best option (other possibilities were intel_lr_context.c
or intel_execlist.c). I am open to a certain bikeshedding on this matter,
anyway.
At some point in time, it would be a good idea to split intel_lrc.c/.h
even further, but for the moment just shove everything together.
v2: Change to intel_lrc.c
v3: Squash together with the header file addition
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As usual, in both a crtc-index and a struct drm_crtc * version.
The function assumes that no one drives their display below 10Hz, and
it will complain if the vblank wait takes longer than that.
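The crtc-index version might look like this (a sketch assembled from the notes below; the vblank wait-queue field name is an assumption):

void drm_wait_one_vblank(struct drm_device *dev, int crtc)
{
	long timeout;
	u32 last;
	int ret;

	ret = drm_vblank_get(dev, crtc);
	if (WARN_ON(ret))
		return;

	last = drm_vblank_count(dev, crtc);

	/* No one drives a display below 10Hz, so 100ms is plenty. */
	timeout = wait_event_timeout(dev->vblank[crtc].queue,
				     last != drm_vblank_count(dev, crtc),
				     msecs_to_jiffies(100));

	WARN(timeout == 0, "vblank wait timed out on crtc %i\n", crtc);

	drm_vblank_put(dev, crtc);
}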
v2: Also check dev->max_vblank_counter since some drivers register a
fake get_vblank_counter function.
v3: Use drm_vblank_count instead of calling the low-level
->get_vblank_counter callback. That way we'll get the sw-cooked
counter for platforms without proper vblank support and so can ditch
the max_vblank_counter check again.
v4: Review from Michel Dänzer:
- Restore lost notes about v3:
- Spelling in kerneldoc.
- Inline wait_event condition.
- s/vblank_wait/wait_one_vblank/
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In general having this can't hurt, and the atomic helpers will need
it to be able to reset the state objects properly. The overall idea
is to reset in the order pixels flow, so planes -> crtcs ->
encoders -> connectors.
v2: Squash in fixup from Ville to correctly deference struct drm_plane
instead of drm_crtc when walking the plane list. Fixes an oops in
driver init and resume.
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Even though we should not try to use 4+GiB GTTs on 32-bit systems, by
using a local variable we can future proof the code whilst making it
easier to read.
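Illustratively (a sketch, names hypothetical):

/* A 64-bit local documents intent and stays correct even if the total
 * ever exceeds 4GiB, which unsigned long can't hold on 32-bit. */
u64 gtt_size = (u64)gtt_entries << PAGE_SHIFT;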
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Appease checkpatch a bit.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Part of the pre-validation for an execbuffer call is that there is at
least one object in the execlist. As we bail if we fail to lookup any
object, we can be sure that after the eb_lookup_vma() there is at least
one object in the vma list and so we do not need to assert.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have an implementation requirement that precludes the user from
requesting a ggtt entry when the device is operating in ppgtt mode. Move
the current check from inside the execbuffer object collation to the
prevalidation phase.
v2: Roll both invalid flags checks into one
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Based upon a hunk from a patch from Chris Wilson, but augmented to:
- Process the batch in the full ppgtt vm so that self-relocations
match again with userspace's expectations.
- Add a comment why plain pin for the global gtt binding is safe at
that point.
v2: Drop local bind_vm variable (Chris).
v3: Explain why this works despite the lack of proper active tracking
for the ggtt batch vma.
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Adapt the macro so that we can pass either the struct drm_device or the
struct drm_i915_private pointers and get the answer we want. Over time,
my plan is to convert all users over to using drm_i915_private and so
trimming down the pointer dance. Having spent a few hours chasing that
goal and achieved over 8k of object code saving, it appears to be a
worthwhile target. This interim macro allows us to slowly convert over.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Drop the (struct drm_device *) cast per the m-l discussion.
Also explain the seemingly unnecessary first cast.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
During ring initialisation, sometimes we observe, though not in
production hardware, that the idle flag is not set even though the ring
is empty. Double check before giving up.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is so that we can make the drm_i915_private->info always the
preferred source for chipset type and feature queries.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This migrates the fence tracking onto the existing seqno
infrastructure so that the later conversion to tracking via requests is
simplified.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move the decision on whether we need to have a mappable object during
execbuffer to the fore and then reuse that decision by propagating the
flag through to reservation. As a corollary, before doing the actual
relocation through the GTT, we can make sure that we do have a GTT
mapping through which to operate.
Note that the key to make this work is to ditch the
obj->map_and_fenceable unbind optimization - with full ppgtt it
doesn't make a lot of sense any more anyway.
v2: Revamp and resend to ease future patches.
v3: Refresh patch rationale
References: https://bugs.freedesktop.org/show_bug.cgi?id=81094
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ben Widawsky <benjamin.widawsky@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
[danvet: Explain why obj->map_and_fenceable is key and split out the
secure batch fix.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If an object is not bound into the global GTT, then it cannot be
accessed via the GTT. This restores the original code that was muddled
by ppGTT. In the process, we remove a WARN that had long outlived its
usefulness and was simply being coded around instead.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I keep telling myself that those tables aren't great because their size
is the number of dwords we need to program and not the number of entries
(number of dwords = number of entries * 2).
And... I got it wrong when I refactored the code. Fortunately, it was
only wrong when the VBT table (or the code parsing it) is itself
erroneous. Long story short, it shouldn't matter, but still, there's a
potential array overflow and random programming of the DDI translation
tables.
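In other words (a sketch; the table and variable names are hypothetical):

/* The tables store reg/value pairs: entries = dwords / 2. */
n_entries = ARRAY_SIZE(ddi_translations) / 2;

/* Clamp an erroneous VBT-derived index instead of overflowing. */
if (level >= n_entries)
	level = n_entries - 1;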
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Remove the check for HAS_PCH_SPLIT; it looks redundant here. Anyway, all the
platforms are checked separately.
v2: Reordering as per the gen (Ville)
Signed-off-by: Sonika Jindal <sonika.jindal@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Pull nouveau drm updates from Ben Skeggs:
"Apologies for not getting this done in time for Dave's drm-next merge
window. As he mentioned, a pre-existing bug reared its head a lot
more obviously after this lot of changes. It took quite a bit of time
to track it down. In any case, Dave suggested I try my luck by
sending directly to you this time.
Overview:
- more code for Tegra GK20A from NVIDIA - probing, reclocking
- better fix for Kepler GPUs that have the graphics engine powered
off on startup, method courtesy of info provided by NVIDIA
- unhardcoding of a bunch of graphics engine setup on
Fermi/Kepler/Maxwell, will hopefully solve some issues people have
noticed on higher-end models
- support for "Zero Bandwidth Clear" on Fermi/Kepler/Maxwell, needs
userspace support in general, but some lucky apps will benefit
automagically
- reviewed/exposed the full object APIs to userspace (finally), gives
it access to perfctrs, ZBC controls, various events. More to come
in the future.
- various other fixes"
Acked-by: Dave Airlie <airlied@redhat.com>
* 'linux-3.17' of git://anongit.freedesktop.org/git/nouveau/linux-2.6: (87 commits)
drm/nouveau: expose the full object/event interfaces to userspace
drm/nouveau: fix headless mode
drm/nouveau: hide sysfs pstate file behind an option again
drm/nv50/disp: shhh compiler
drm/gf100-/gr: implement the proper SetShaderExceptions method
drm/gf100-/gr: remove some broken ltc bashing, for now
drm/gf100-/gr: unhardcode attribute cb config
drm/gf100-/gr: fetch tpcs-per-ppc info on startup
drm/gf100-/gr: unhardcode pagepool config
drm/gf100-/gr: unhardcode bundle cb config
drm/gf100-/gr: improve initial context patch list helpers
drm/gf100-/gr: add support for zero bandwidth clear
drm/nouveau/ltc: add zbc drivers
drm/nouveau/ltc: s/ltcg/ltc/ + cleanup
drm/nouveau: use ram info from nvif_device
drm/nouveau/disp: implement nvif event sources for vblank/connector notifiers
drm/nouveau/disp: allow user direct access to channel control registers
drm/nouveau/disp: audit and version display classes
drm/nouveau/disp: audit and version SCANOUTPOS method
drm/nv50-/disp: audit and version PIOR_PWR method
...
No-one has yet had time to move this to debugfs as discussed during
the last merge window. Until this happens, hide the option to make
it clear it's not going to be here forever.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
We have another version of it implemented in SW, however, that version
isn't serialised with normal PGRAPH operation and can possibly clobber
the enables for another context.
This is the same method that's implemented by the NVIDIA binary driver.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
... and hope that the defaults are good enough. This was always
supposed to be a read/modify/write thing anyway, so we're writing
very wrong stuff for some boards already.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Should be the same values as before, except:
GF117 has smaller buffer allocated, as per register setup.
GK20A now uses values from Tegra driver, not GK104's.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Removes need for fixed buffer indices, and allows the functions
utilising them to also be run outside of context generation.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Default ZBC table is compatible with binary driver defaults.
Userspace will need to be updated to take full advantage of this
feature, however, some applications will see a performance boost
without updated drivers.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
One of the next commits will remove some of the class IDs, leaving only
the ones used by NVIDIA which, presumably, mark where functionality
changes actually happened.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The indirect method has been left in-place here as a fallback path, as
it may not be possible to map the non-PAGE_SIZE aligned control areas
across some chipset+interface combinations.
This isn't a problem for the primary use-case where the core and drm
are linked together in kernel-land, but across a VM or (in the case
where it applies now) between the core in the kernel and a userspace
test tool.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
The full object interfaces are about to be exposed to userspace, so we
need to check for any security-related issues and version the structs
to make it easier to handle any changes we may need in the future.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is a wrapper around the interfaces defined in an earlier commit,
and is also used by various userspace (either by a libdrm backend, or
libpciaccess) tools/tests.
In the future this will be extended to handle channels, replacing some
long-unloved code we currently use, and allow fifo/display/mpeg (hi
Ilia ;)) engines to all be exposed in the same way.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This forms the basis for the new APIs that will be exposed to userspace,
giving it access to:
- Object method calls, the most immediately useful of which are performance
counters and the ability to manipulate the ZBC tables.
- Information on the child classes an object supports, in order to avoid
having to try all supported classes until successful.
- Notifications, which will be used in the future to inform the client
if its channel was killed due to a lockup, etc.
This commit imports the interfaces, but they are not currently used. The DRM
portion of the driver will be ported to speak to the core using these
interfaces as much as possible.
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
This is a lot of prep-work for being able to send event notifications
back to userspace. Events now contain data, rather than a "something
just happened" signal.
Handler data is now embedded into a containing structure, rather than
being kmalloc()'d, and can optionally have the notify routine handled
in a workqueue.
Various races between suspend/unload with display HPD/DP IRQ handlers
automagically solved as a result.
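The embedding pattern, roughly (every name here is a hypothetical illustration of the container_of-based handlers):

struct conn_hpd {
	struct notify_handler notify;	/* embedded, not kmalloc()'d */
	struct drm_connector *connector;
};

static int conn_hpd_work(struct notify_handler *notify)
{
	struct conn_hpd *hpd = container_of(notify, typeof(*hpd), notify);
	const void *rep = notify->data;	/* events carry data now */

	/* ...may run from a workqueue if requested at init time... */
	return NOTIFY_KEEP;
}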
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Linux 3.16 fixed multiple bugs in kms pageflip completion events
and timestamping, which were originally introduced in Linux 3.13.
These fixes have been backported to all stable kernels since 3.13.
However, the userspace nouveau-ddx needs to be aware if it is
running on a kernel on which these bugs are fixed, or not.
Bump the patchlevel of the drm driver version to signal this,
so backporting this patch to stable 3.13+ kernels will give the
ddx the required info.
Signed-off-by: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: <stable@vger.kernel.org> #v3.13+
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>