Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Arindam Nath <arindam.nath@amd.com>
Reviewed-by: Leo Liu <leo.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
When the radeon driver resets a gpu, it attempts to test whether all the
rings can successfully handle an IB. If these rings fail to respond, the
process will wait forever. Another gpu reset can't happen at this point,
as the current reset holds a lock required to do so. Instead, make all
the IB tests run with a timeout, so the system can attempt to recover
in this case.
While this doesn't fix the underlying issue with card resets failing, it
gives the system a higher chance of recovering. These timeouts have been
confirmed to help both a Tahiti and a Hawaii card recover after a gpu reset.
This also adds a new function, radeon_fence_wait_timeout, that behaves like
fence_wait_timeout. It is used instead of fence_wait_timeout as it continues
to work during a reset. radeon_fence_wait is changed to be implemented
using this function.
V2:
- Changed the timeout to 1s, as the default 10s from radeon_lockup_timeout was
  too long. A timeout of 100ms was tested and found to be too short.
- Changed radeon_fence_wait_timeout to behave more like fence_wait_timeout.
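As a minimal, kernel-style sketch of that layering (the example_fence type and
function names here are hypothetical, not the driver's actual code; only the
fence_wait_timeout() return convention is taken from the description above):

    #include <linux/wait.h>
    #include <linux/sched.h>

    struct example_fence {
            wait_queue_head_t wq;
            bool signaled;
    };

    /* Mirrors the fence_wait_timeout() convention: returns a negative
     * error code, 0 if the wait timed out, or the remaining timeout
     * in jiffies on success. */
    static long example_fence_wait_timeout(struct example_fence *fence,
                                           bool intr, long timeout)
    {
            if (intr)
                    return wait_event_interruptible_timeout(fence->wq,
                                                            fence->signaled,
                                                            timeout);
            return wait_event_timeout(fence->wq, fence->signaled, timeout);
    }

    /* The plain blocking wait is then just the timeout variant called
     * with MAX_SCHEDULE_TIMEOUT, mapping the result back to 0/-errno. */
    static int example_fence_wait(struct example_fence *fence, bool intr)
    {
            long r = example_fence_wait_timeout(fence, intr,
                                                MAX_SCHEDULE_TIMEOUT);

            return r < 0 ? r : 0;
    }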
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Matthew Dawson <matthew@mjdsystems.ca>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The hardware doesn't seem to work correctly here, so just block userspace in this case.
v2: add missing defines
Bugs: https://bugs.freedesktop.org/show_bug.cgi?id=85320
Signed-off-by: Christian König <christian.koenig@amd.com>
CC: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
This fixes "UVD not responding, trying to reset the VCPU"
messages on earlier ASICs.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Only the essentials, because this hw generation is really buggy.
v2: start supporting RV670, RV620 and RV635 as well
v3: activate more workarounds
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
v2: cleanup R600 support
v3: rebased on current drm-fixes-3.12
v4: rebased on drm-next-3.14
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It isn't necessary for command streams generated by the kernel (at least
not while we aren't storing ring or indirect buffers in VRAM).
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Some RV7xx generation hardware crashes the first
time the UVD clocks are raised. Try to avoid this
by booting these chips with the lower clocks.
Workaround for: https://bugzilla.kernel.org/show_bug.cgi?id=71891
v2: lower clocks on IB test as well
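For illustration, a sketch of the bring-up order (example_uvd_safe_start is a
hypothetical helper and the flow is simplified; radeon_set_uvd_clocks() and
uvd_v1_0_start() are existing driver entry points, and 53300/40000 are the low
UVD clocks the driver already uses elsewhere):

    /* assumes the radeon driver context, i.e. #include "radeon.h" */
    static int example_uvd_safe_start(struct radeon_device *rdev)
    {
            int r;

            /* program low, known-safe UVD clocks for bring-up */
            r = radeon_set_uvd_clocks(rdev, 53300, 40000);
            if (r)
                    return r;

            /* start the block while the clocks are still low; they are
             * only raised later, once UVD has proven to be alive */
            return uvd_v1_0_start(rdev);
    }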
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
In all cases where it really matters, we are using the read functions anyway.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
To work around bugs and/or certain limits, it's sometimes
useful to fall back to waiting on fences.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
This only seems to work for H.264 but not for VC-1 streams.
Need to investigate further why exactly.
This reverts commit 4b40e59212.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Starting with UVD3, message and feedback buffers have their
own 256MB segment, so there is no need to force them into VRAM any more.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Alex writes:
This is the radeon drm-next request. Big changes include:
- support for dpm on CIK parts
- support for ASPM on CIK parts
- support for berlin GPUs
- major ring handling cleanup
- remove the old 3D blit code for bo moves in favor of CP DMA or sDMA
- lots of bug fixes
[airlied: fix up a bunch of conflicts from drm_order removal]
* 'drm-next-3.12' of git://people.freedesktop.org/~agd5f/linux: (898 commits)
drm/radeon/dpm: make sure dc performance level limits are valid (CI)
drm/radeon/dpm: make sure dc performance level limits are valid (BTC-SI) (v2)
drm/radeon: gcc fixes for extended dpm tables
drm/radeon: gcc fixes for kb/kv dpm
drm/radeon: gcc fixes for ci dpm
drm/radeon: gcc fixes for si dpm
drm/radeon: gcc fixes for ni dpm
drm/radeon: gcc fixes for trinity dpm
drm/radeon: gcc fixes for sumo dpm
drm/radeonn: gcc fixes for rv7xx/eg/btc dpm
drm/radeon: gcc fixes for rv6xx dpm
drm/radeon: gcc fixes for radeon_atombios.c
drm/radeon: enable UVD interrupts on CIK
drm/radeon: fix init ordering for r600+
drm/radeon/dpm: only need to reprogram uvd if uvd pg is enabled
drm/radeon: check the return value of uvd_v1_0_start in uvd_v1_0_init
drm/radeon: split out radeon_uvd_resume from uvd_v4_2_resume
radeon kms: fix uninitialised hotplug work usage in r100_irq_process()
drm/radeon/audio: set up the sads on DCE3.2 asics
drm/radeon: fix handling of variable sized arrays for router objects
...
Conflicts:
drivers/gpu/drm/i915/i915_dma.c
drivers/gpu/drm/i915/i915_gem_dmabuf.c
drivers/gpu/drm/i915/intel_pm.c
drivers/gpu/drm/radeon/cik.c
drivers/gpu/drm/radeon/ni.c
drivers/gpu/drm/radeon/r600.c
No need to try the ring tests if starting the UVD block failed.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Our different hardware blocks are actually completely
separated, so it doesn't make much sense any more to
structure the code by pure chipset generations.
Start restructuring the code by separating out the UVD block.
v2: updated commit message
v3: rebased and restructured start/stop functions for kv dpm.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>