/*
 * Copyright 2008 Advanced Micro Devices, Inc.
 * Copyright 2008 Red Hat Inc.
 * Copyright 2009 Jerome Glisse.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
 * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
 * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
 * OTHER DEALINGS IN THE SOFTWARE.
 *
 * Authors: Dave Airlie
 *          Alex Deucher
 *          Jerome Glisse
 */
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <drm/drmP.h>
#include <drm/drm.h>
#include <drm/drm_crtc_helper.h>
#include "radeon_reg.h"
#include "radeon.h"
#include "radeon_asic.h"
#include <drm/radeon_drm.h>
#include "r100_track.h"
#include "r300d.h"
#include "rv350d.h"
#include "r300_reg_safe.h"

/* This file gathers functions specific to: r300,r350,rv350,rv370,rv380
 *
 * GPU Errata:
 * - HOST_PATH_CNTL: the r300 family seems to dislike writes to HOST_PATH_CNTL
 *   using MMIO to flush the host path read cache; this leads to a HARDLOCKUP.
 *   However, scheduling such a write on the ring seems harmless. I suspect
 *   the CP read collides with the flush somehow, or maybe the MC, hard to
 *   tell. (Jerome Glisse)
 */

/*
 * Indirect registers accessor
 */
uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg)
{
	unsigned long flags;
	uint32_t r;

	spin_lock_irqsave(&rdev->pcie_idx_lock, flags);
	WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
	r = RREG32(RADEON_PCIE_DATA);
	spin_unlock_irqrestore(&rdev->pcie_idx_lock, flags);
	return r;
}

void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
	unsigned long flags;

	spin_lock_irqsave(&rdev->pcie_idx_lock, flags);
	WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
	WREG32(RADEON_PCIE_DATA, (v));
	spin_unlock_irqrestore(&rdev->pcie_idx_lock, flags);
}

/*
 * rv370,rv380 PCIE GART
 */
static int rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev);

void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev)
{
	uint32_t tmp;
	int i;

	/* Workaround HW bug: do the flush twice */
	for (i = 0; i < 2; i++) {
		tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
		WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp | RADEON_PCIE_TX_GART_INVALIDATE_TLB);
		(void)RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
		WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	}
	mb();
}

#define R300_PTE_UNSNOOPED (1 << 0)
#define R300_PTE_WRITEABLE (1 << 2)
#define R300_PTE_READABLE  (1 << 3)

uint64_t rv370_pcie_gart_get_page_entry(uint64_t addr, uint32_t flags)
{
	addr = (lower_32_bits(addr) >> 8) |
		((upper_32_bits(addr) & 0xff) << 24);
	if (flags & RADEON_GART_PAGE_READ)
		addr |= R300_PTE_READABLE;
	if (flags & RADEON_GART_PAGE_WRITE)
		addr |= R300_PTE_WRITEABLE;
	if (!(flags & RADEON_GART_PAGE_SNOOP))
		addr |= R300_PTE_UNSNOOPED;
	return addr;
}

void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
			      uint64_t entry)
{
	void __iomem *ptr = rdev->gart.ptr;

	/* On x86 we want this to be CPU endian; on powerpc without HW
	 * swappers it'll get swapped on the way into VRAM, so there is
	 * no need for cpu_to_le32 on VRAM tables. */
	writel(entry, ((void __iomem *)ptr) + (i * 4));
}

int rv370_pcie_gart_init(struct radeon_device *rdev)
{
	int r;

	if (rdev->gart.robj) {
		WARN(1, "RV370 PCIE GART already initialized\n");
		return 0;
	}
	/* Initialize common gart structure */
	r = radeon_gart_init(rdev);
	if (r)
		return r;
	r = rv370_debugfs_pcie_gart_info_init(rdev);
	if (r)
		DRM_ERROR("Failed to register debugfs file for PCIE gart !\n");
	rdev->gart.table_size = rdev->gart.num_gpu_pages * 4;
	rdev->asic->gart.tlb_flush = &rv370_pcie_gart_tlb_flush;
	rdev->asic->gart.get_page_entry = &rv370_pcie_gart_get_page_entry;
	rdev->asic->gart.set_page = &rv370_pcie_gart_set_page;
	return radeon_gart_table_vram_alloc(rdev);
}

int rv370_pcie_gart_enable(struct radeon_device *rdev)
{
	uint32_t table_addr;
	uint32_t tmp;
	int r;

	if (rdev->gart.robj == NULL) {
		dev_err(rdev->dev, "No VRAM object for PCIE GART.\n");
		return -EINVAL;
	}
	r = radeon_gart_table_vram_pin(rdev);
	if (r)
		return r;
	/* discard memory request outside of configured range */
	tmp = RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, rdev->mc.gtt_start);
	tmp = rdev->mc.gtt_end & ~RADEON_GPU_PAGE_MASK;
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, tmp);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_HI, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_HI, 0);
	table_addr = rdev->gart.table_addr;
	WREG32_PCIE(RADEON_PCIE_TX_GART_BASE, table_addr);
	/* FIXME: setup default page */
	WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_LO, rdev->mc.vram_start);
	WREG32_PCIE(RADEON_PCIE_TX_DISCARD_RD_ADDR_HI, 0);
	/* Clear error */
	WREG32_PCIE(RADEON_PCIE_TX_GART_ERROR, 0);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	tmp |= RADEON_PCIE_TX_GART_EN;
	tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp);
	rv370_pcie_gart_tlb_flush(rdev);
	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
		 (unsigned)(rdev->mc.gtt_size >> 20),
		 (unsigned long long)table_addr);
	rdev->gart.ready = true;
	return 0;
}

void rv370_pcie_gart_disable(struct radeon_device *rdev)
{
	u32 tmp;

	WREG32_PCIE(RADEON_PCIE_TX_GART_START_LO, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_LO, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_START_HI, 0);
	WREG32_PCIE(RADEON_PCIE_TX_GART_END_HI, 0);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	tmp |= RADEON_PCIE_TX_GART_UNMAPPED_ACCESS_DISCARD;
	WREG32_PCIE(RADEON_PCIE_TX_GART_CNTL, tmp & ~RADEON_PCIE_TX_GART_EN);
	radeon_gart_table_vram_unpin(rdev);
}

void rv370_pcie_gart_fini(struct radeon_device *rdev)
{
	radeon_gart_fini(rdev);
	rv370_pcie_gart_disable(rdev);
	radeon_gart_table_vram_free(rdev);
}

void r300_fence_ring_emit(struct radeon_device *rdev,
			  struct radeon_fence *fence)
{
	struct radeon_ring *ring = &rdev->ring[fence->ring];

	/* Whoever calls radeon_fence_emit should call ring_lock and ask
	 * for enough space (today the callers are ib schedule and buffer move) */
	/* Write SC register so SC & US assert idle */
	radeon_ring_write(ring, PACKET0(R300_RE_SCISSORS_TL, 0));
	radeon_ring_write(ring, 0);
	radeon_ring_write(ring, PACKET0(R300_RE_SCISSORS_BR, 0));
	radeon_ring_write(ring, 0);
	/* Flush 3D cache */
	radeon_ring_write(ring, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_RB3D_DC_FLUSH);
	radeon_ring_write(ring, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_ZC_FLUSH);
	/* Wait until IDLE & CLEAN */
	radeon_ring_write(ring, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(ring, (RADEON_WAIT_3D_IDLECLEAN |
				 RADEON_WAIT_2D_IDLECLEAN |
				 RADEON_WAIT_DMA_GUI_IDLE));
	radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(ring, rdev->config.r300.hdp_cntl |
				RADEON_HDP_READ_BUFFER_INVALIDATE);
	radeon_ring_write(ring, PACKET0(RADEON_HOST_PATH_CNTL, 0));
	radeon_ring_write(ring, rdev->config.r300.hdp_cntl);
	/* Emit fence sequence & fire IRQ */
	radeon_ring_write(ring, PACKET0(rdev->fence_drv[fence->ring].scratch_reg, 0));
	radeon_ring_write(ring, fence->seq);
	radeon_ring_write(ring, PACKET0(RADEON_GEN_INT_STATUS, 0));
	radeon_ring_write(ring, RADEON_SW_INT_FIRE);
}

void r300_ring_start(struct radeon_device *rdev, struct radeon_ring *ring)
{
	unsigned gb_tile_config;
	int r;

	/* Sub pixel 1/12 so we can have 4K rendering according to doc */
	gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16);
	switch (rdev->num_gb_pipes) {
	case 2:
		gb_tile_config |= R300_PIPE_COUNT_R300;
		break;
	case 3:
		gb_tile_config |= R300_PIPE_COUNT_R420_3P;
		break;
	case 4:
		gb_tile_config |= R300_PIPE_COUNT_R420;
		break;
	case 1:
	default:
		gb_tile_config |= R300_PIPE_COUNT_RV350;
		break;
	}

	r = radeon_ring_lock(rdev, ring, 64);
	if (r) {
		return;
	}
	radeon_ring_write(ring, PACKET0(RADEON_ISYNC_CNTL, 0));
	radeon_ring_write(ring,
			  RADEON_ISYNC_ANY2D_IDLE3D |
			  RADEON_ISYNC_ANY3D_IDLE2D |
			  RADEON_ISYNC_WAIT_IDLEGUI |
			  RADEON_ISYNC_CPSCRATCH_IDLEGUI);
	radeon_ring_write(ring, PACKET0(R300_GB_TILE_CONFIG, 0));
	radeon_ring_write(ring, gb_tile_config);
	radeon_ring_write(ring, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(ring,
			  RADEON_WAIT_2D_IDLECLEAN |
			  RADEON_WAIT_3D_IDLECLEAN);
	radeon_ring_write(ring, PACKET0(R300_DST_PIPE_CONFIG, 0));
	radeon_ring_write(ring, R300_PIPE_AUTO_CONFIG);
	radeon_ring_write(ring, PACKET0(R300_GB_SELECT, 0));
	radeon_ring_write(ring, 0);
	radeon_ring_write(ring, PACKET0(R300_GB_ENABLE, 0));
	radeon_ring_write(ring, 0);
	radeon_ring_write(ring, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_RB3D_DC_FLUSH | R300_RB3D_DC_FREE);
	radeon_ring_write(ring, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_ZC_FLUSH | R300_ZC_FREE);
	radeon_ring_write(ring, PACKET0(RADEON_WAIT_UNTIL, 0));
	radeon_ring_write(ring,
			  RADEON_WAIT_2D_IDLECLEAN |
			  RADEON_WAIT_3D_IDLECLEAN);
	radeon_ring_write(ring, PACKET0(R300_GB_AA_CONFIG, 0));
	radeon_ring_write(ring, 0);
	radeon_ring_write(ring, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_RB3D_DC_FLUSH | R300_RB3D_DC_FREE);
	radeon_ring_write(ring, PACKET0(R300_RB3D_ZCACHE_CTLSTAT, 0));
	radeon_ring_write(ring, R300_ZC_FLUSH | R300_ZC_FREE);
	radeon_ring_write(ring, PACKET0(R300_GB_MSPOS0, 0));
	radeon_ring_write(ring,
			  ((6 << R300_MS_X0_SHIFT) |
			   (6 << R300_MS_Y0_SHIFT) |
			   (6 << R300_MS_X1_SHIFT) |
			   (6 << R300_MS_Y1_SHIFT) |
			   (6 << R300_MS_X2_SHIFT) |
			   (6 << R300_MS_Y2_SHIFT) |
			   (6 << R300_MSBD0_Y_SHIFT) |
			   (6 << R300_MSBD0_X_SHIFT)));
	radeon_ring_write(ring, PACKET0(R300_GB_MSPOS1, 0));
	radeon_ring_write(ring,
			  ((6 << R300_MS_X3_SHIFT) |
			   (6 << R300_MS_Y3_SHIFT) |
			   (6 << R300_MS_X4_SHIFT) |
			   (6 << R300_MS_Y4_SHIFT) |
			   (6 << R300_MS_X5_SHIFT) |
			   (6 << R300_MS_Y5_SHIFT) |
			   (6 << R300_MSBD1_SHIFT)));
	radeon_ring_write(ring, PACKET0(R300_GA_ENHANCE, 0));
	radeon_ring_write(ring, R300_GA_DEADLOCK_CNTL | R300_GA_FASTSYNC_CNTL);
	radeon_ring_write(ring, PACKET0(R300_GA_POLY_MODE, 0));
	radeon_ring_write(ring,
			  R300_FRONT_PTYPE_TRIANGE | R300_BACK_PTYPE_TRIANGE);
	radeon_ring_write(ring, PACKET0(R300_GA_ROUND_MODE, 0));
	radeon_ring_write(ring,
			  R300_GEOMETRY_ROUND_NEAREST |
			  R300_COLOR_ROUND_NEAREST);
	radeon_ring_unlock_commit(rdev, ring, false);
}

static void r300_errata(struct radeon_device *rdev)
{
	rdev->pll_errata = 0;

	if (rdev->family == CHIP_R300 &&
	    (RREG32(RADEON_CONFIG_CNTL) & RADEON_CFG_ATI_REV_ID_MASK) == RADEON_CFG_ATI_REV_A11) {
		rdev->pll_errata |= CHIP_ERRATA_R300_CG;
	}
}

int r300_mc_wait_for_idle(struct radeon_device *rdev)
{
	unsigned i;
	uint32_t tmp;

	for (i = 0; i < rdev->usec_timeout; i++) {
		/* read MC_STATUS */
		tmp = RREG32(RADEON_MC_STATUS);
		if (tmp & R300_MC_IDLE) {
			return 0;
		}
		DRM_UDELAY(1);
	}
	return -1;
}

static void r300_gpu_init(struct radeon_device *rdev)
{
	uint32_t gb_tile_config, tmp;

	if ((rdev->family == CHIP_R300 && rdev->pdev->device != 0x4144) ||
	    (rdev->family == CHIP_R350 && rdev->pdev->device != 0x4148)) {
		/* r300,r350 */
		rdev->num_gb_pipes = 2;
	} else {
		/* rv350,rv370,rv380,r300 AD, r350 AH */
		rdev->num_gb_pipes = 1;
	}
	rdev->num_z_pipes = 1;
	gb_tile_config = (R300_ENABLE_TILING | R300_TILE_SIZE_16);
	switch (rdev->num_gb_pipes) {
	case 2:
		gb_tile_config |= R300_PIPE_COUNT_R300;
		break;
	case 3:
		gb_tile_config |= R300_PIPE_COUNT_R420_3P;
		break;
	case 4:
		gb_tile_config |= R300_PIPE_COUNT_R420;
		break;
	default:
	case 1:
		gb_tile_config |= R300_PIPE_COUNT_RV350;
		break;
	}
	WREG32(R300_GB_TILE_CONFIG, gb_tile_config);

	if (r100_gui_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait GUI idle while "
		       "programming pipes. Bad things might happen.\n");
	}

	tmp = RREG32(R300_DST_PIPE_CONFIG);
	WREG32(R300_DST_PIPE_CONFIG, tmp | R300_PIPE_AUTO_CONFIG);

	WREG32(R300_RB2D_DSTCACHE_MODE,
	       R300_DC_AUTOFLUSH_ENABLE |
	       R300_DC_DC_DISABLE_IGNORE_PE);

	if (r100_gui_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait GUI idle while "
		       "programming pipes. Bad things might happen.\n");
	}
	if (r300_mc_wait_for_idle(rdev)) {
		printk(KERN_WARNING "Failed to wait MC idle while "
		       "programming pipes. Bad things might happen.\n");
	}
	DRM_INFO("radeon: %d quad pipes, %d Z pipes initialized.\n",
		 rdev->num_gb_pipes, rdev->num_z_pipes);
}

int r300_asic_reset(struct radeon_device *rdev, bool hard)
{
	struct r100_mc_save save;
	u32 status, tmp;
	int ret = 0;

	status = RREG32(R_000E40_RBBM_STATUS);
	if (!G_000E40_GUI_ACTIVE(status)) {
		return 0;
	}
	r100_mc_stop(rdev, &save);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* stop CP */
	WREG32(RADEON_CP_CSQ_CNTL, 0);
	tmp = RREG32(RADEON_CP_RB_CNTL);
	WREG32(RADEON_CP_RB_CNTL, tmp | RADEON_RB_RPTR_WR_ENA);
	WREG32(RADEON_CP_RB_RPTR_WR, 0);
	WREG32(RADEON_CP_RB_WPTR, 0);
	WREG32(RADEON_CP_RB_CNTL, tmp);
	/* save PCI state */
	pci_save_state(rdev->pdev);
	/* disable bus mastering */
	r100_bm_disable(rdev);
	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_VAP(1) |
					S_0000F0_SOFT_RESET_GA(1));
	RREG32(R_0000F0_RBBM_SOFT_RESET);
	mdelay(500);
	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
	mdelay(1);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* Resetting the CP seems to be problematic: sometimes it ends up
	 * hard locking the computer, but it's necessary for a successful
	 * reset. More testing is needed on R3XX/R4XX to find a reliable
	 * solution (if there is any).
	 */
	WREG32(R_0000F0_RBBM_SOFT_RESET, S_0000F0_SOFT_RESET_CP(1));
	RREG32(R_0000F0_RBBM_SOFT_RESET);
	mdelay(500);
	WREG32(R_0000F0_RBBM_SOFT_RESET, 0);
	mdelay(1);
	status = RREG32(R_000E40_RBBM_STATUS);
	dev_info(rdev->dev, "(%s:%d) RBBM_STATUS=0x%08X\n", __func__, __LINE__, status);
	/* restore PCI & busmastering */
	pci_restore_state(rdev->pdev);
	r100_enable_bm(rdev);
	/* Check if GPU is idle */
	if (G_000E40_GA_BUSY(status) || G_000E40_VAP_BUSY(status)) {
		dev_err(rdev->dev, "failed to reset GPU\n");
		ret = -1;
	} else
		dev_info(rdev->dev, "GPU reset succeeded\n");
	r100_mc_resume(rdev, &save);
	return ret;
}

/*
 * r300,r350,rv350,rv380 VRAM info
 */
void r300_mc_init(struct radeon_device *rdev)
{
	u64 base;
	u32 tmp;

	/* DDR for all cards after R300 & IGP */
	rdev->mc.vram_is_ddr = true;
	tmp = RREG32(RADEON_MEM_CNTL);
	tmp &= R300_MEM_NUM_CHANNELS_MASK;
	switch (tmp) {
	case 0: rdev->mc.vram_width = 64; break;
	case 1: rdev->mc.vram_width = 128; break;
	case 2: rdev->mc.vram_width = 256; break;
	default: rdev->mc.vram_width = 128; break;
	}
	r100_vram_init_sizes(rdev);
	base = rdev->mc.aper_base;
	if (rdev->flags & RADEON_IS_IGP)
		base = (RREG32(RADEON_NB_TOM) & 0xffff) << 16;
	radeon_vram_location(rdev, &rdev->mc, base);
	rdev->mc.gtt_base_align = 0;
	if (!(rdev->flags & RADEON_IS_AGP))
		radeon_gtt_location(rdev, &rdev->mc);
	radeon_update_bandwidth_info(rdev);
}

void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes)
{
	uint32_t link_width_cntl, mask;

	if (rdev->flags & RADEON_IS_IGP)
		return;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return;

	/* FIXME wait for idle */

	switch (lanes) {
	case 0:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X0;
		break;
	case 1:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X1;
		break;
	case 2:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X2;
		break;
	case 4:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X4;
		break;
	case 8:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X8;
		break;
	case 12:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X12;
		break;
	case 16:
	default:
		mask = RADEON_PCIE_LC_LINK_WIDTH_X16;
		break;
	}

	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

	if ((link_width_cntl & RADEON_PCIE_LC_LINK_WIDTH_RD_MASK) ==
	    (mask << RADEON_PCIE_LC_LINK_WIDTH_RD_SHIFT))
		return;

	link_width_cntl &= ~(RADEON_PCIE_LC_LINK_WIDTH_MASK |
			     RADEON_PCIE_LC_RECONFIG_NOW |
			     RADEON_PCIE_LC_RECONFIG_LATER |
			     RADEON_PCIE_LC_SHORT_RECONFIG_EN);
	link_width_cntl |= mask;
	WREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL, link_width_cntl);
	WREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL, (link_width_cntl |
						     RADEON_PCIE_LC_RECONFIG_NOW));

	/* wait for lane set to complete */
	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);
	while (link_width_cntl == 0xffffffff)
		link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);
}

int rv370_get_pcie_lanes(struct radeon_device *rdev)
{
	u32 link_width_cntl;

	if (rdev->flags & RADEON_IS_IGP)
		return 0;

	if (!(rdev->flags & RADEON_IS_PCIE))
		return 0;

	/* FIXME wait for idle */

	link_width_cntl = RREG32_PCIE(RADEON_PCIE_LC_LINK_WIDTH_CNTL);

	switch ((link_width_cntl & RADEON_PCIE_LC_LINK_WIDTH_RD_MASK) >> RADEON_PCIE_LC_LINK_WIDTH_RD_SHIFT) {
	case RADEON_PCIE_LC_LINK_WIDTH_X0:
		return 0;
	case RADEON_PCIE_LC_LINK_WIDTH_X1:
		return 1;
	case RADEON_PCIE_LC_LINK_WIDTH_X2:
		return 2;
	case RADEON_PCIE_LC_LINK_WIDTH_X4:
		return 4;
	case RADEON_PCIE_LC_LINK_WIDTH_X8:
		return 8;
	case RADEON_PCIE_LC_LINK_WIDTH_X16:
	default:
		return 16;
	}
}

#if defined(CONFIG_DEBUG_FS)
static int rv370_debugfs_pcie_gart_info(struct seq_file *m, void *data)
{
	struct drm_info_node *node = (struct drm_info_node *) m->private;
	struct drm_device *dev = node->minor->dev;
	struct radeon_device *rdev = dev->dev_private;
	uint32_t tmp;

	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_CNTL);
	seq_printf(m, "PCIE_TX_GART_CNTL 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_BASE);
	seq_printf(m, "PCIE_TX_GART_BASE 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_START_LO);
	seq_printf(m, "PCIE_TX_GART_START_LO 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_START_HI);
	seq_printf(m, "PCIE_TX_GART_START_HI 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_END_LO);
	seq_printf(m, "PCIE_TX_GART_END_LO 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_END_HI);
	seq_printf(m, "PCIE_TX_GART_END_HI 0x%08x\n", tmp);
	tmp = RREG32_PCIE(RADEON_PCIE_TX_GART_ERROR);
	seq_printf(m, "PCIE_TX_GART_ERROR 0x%08x\n", tmp);
	return 0;
}

static struct drm_info_list rv370_pcie_gart_info_list[] = {
	{"rv370_pcie_gart_info", rv370_debugfs_pcie_gart_info, 0, NULL},
};
#endif

static int rv370_debugfs_pcie_gart_info_init(struct radeon_device *rdev)
{
#if defined(CONFIG_DEBUG_FS)
	return radeon_debugfs_add_files(rdev, rv370_pcie_gart_info_list, 1);
#else
	return 0;
#endif
}

static int r300_packet0_check(struct radeon_cs_parser *p,
		struct radeon_cs_packet *pkt,
		unsigned idx, unsigned reg)
{
	struct radeon_bo_list *reloc;
	struct r100_cs_track *track;
	volatile uint32_t *ib;
	uint32_t tmp, tile_flags = 0;
	unsigned i;
	int r;
	u32 idx_value;

	ib = p->ib.ptr;
	track = (struct r100_cs_track *)p->track;
	idx_value = radeon_get_ib_value(p, idx);

	switch(reg) {
	case AVIVO_D1MODE_VLINE_START_END:
	case RADEON_CRTC_GUI_TRIG_VLINE:
		r = r100_cs_packet_parse_vline(p);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			radeon_cs_dump_packet(p, pkt);
			return r;
		}
		break;
	case RADEON_DST_PITCH_OFFSET:
	case RADEON_SRC_PITCH_OFFSET:
		r = r100_reloc_pitch_offset(p, pkt, idx, reg);
		if (r)
			return r;
		break;
	case R300_RB3D_COLOROFFSET0:
	case R300_RB3D_COLOROFFSET1:
	case R300_RB3D_COLOROFFSET2:
	case R300_RB3D_COLOROFFSET3:
		i = (reg - R300_RB3D_COLOROFFSET0) >> 2;
		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			radeon_cs_dump_packet(p, pkt);
			return r;
		}
		track->cb[i].robj = reloc->robj;
		track->cb[i].offset = idx_value;
		track->cb_dirty = true;
		ib[idx] = idx_value + ((u32)reloc->gpu_offset);
		break;
	case R300_ZB_DEPTHOFFSET:
		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			radeon_cs_dump_packet(p, pkt);
			return r;
		}
		track->zb.robj = reloc->robj;
		track->zb.offset = idx_value;
		track->zb_dirty = true;
		ib[idx] = idx_value + ((u32)reloc->gpu_offset);
		break;
	case R300_TX_OFFSET_0:
	case R300_TX_OFFSET_0+4:
	case R300_TX_OFFSET_0+8:
	case R300_TX_OFFSET_0+12:
	case R300_TX_OFFSET_0+16:
	case R300_TX_OFFSET_0+20:
	case R300_TX_OFFSET_0+24:
	case R300_TX_OFFSET_0+28:
	case R300_TX_OFFSET_0+32:
	case R300_TX_OFFSET_0+36:
	case R300_TX_OFFSET_0+40:
	case R300_TX_OFFSET_0+44:
	case R300_TX_OFFSET_0+48:
	case R300_TX_OFFSET_0+52:
	case R300_TX_OFFSET_0+56:
	case R300_TX_OFFSET_0+60:
		i = (reg - R300_TX_OFFSET_0) >> 2;
		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
		if (r) {
			DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					idx, reg);
			radeon_cs_dump_packet(p, pkt);
			return r;
		}
		if (p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS) {
			ib[idx] = (idx_value & 31) | /* keep the 1st 5 bits */
				  ((idx_value & ~31) + (u32)reloc->gpu_offset);
		} else {
			if (reloc->tiling_flags & RADEON_TILING_MACRO)
				tile_flags |= R300_TXO_MACRO_TILE;
			if (reloc->tiling_flags & RADEON_TILING_MICRO)
				tile_flags |= R300_TXO_MICRO_TILE;
			else if (reloc->tiling_flags & RADEON_TILING_MICRO_SQUARE)
				tile_flags |= R300_TXO_MICRO_TILE_SQUARE;

			tmp = idx_value + ((u32)reloc->gpu_offset);
			tmp |= tile_flags;
			ib[idx] = tmp;
		}
		track->textures[i].robj = reloc->robj;
		track->tex_dirty = true;
		break;
	/* Tracked registers */
	case 0x2084:
		/* VAP_VF_CNTL */
		track->vap_vf_cntl = idx_value;
		break;
	case 0x20B4:
		/* VAP_VTX_SIZE */
		track->vtx_size = idx_value & 0x7F;
		break;
	case 0x2134:
		/* VAP_VF_MAX_VTX_INDX */
		track->max_indx = idx_value & 0x00FFFFFFUL;
		break;
	case 0x2088:
		/* VAP_ALT_NUM_VERTICES - only valid on r500 */
		if (p->rdev->family < CHIP_RV515)
			goto fail;
		track->vap_alt_nverts = idx_value & 0xFFFFFF;
		break;
	case 0x43E4:
		/* SC_SCISSOR1 */
		track->maxy = ((idx_value >> 13) & 0x1FFF) + 1;
		if (p->rdev->family < CHIP_RV515) {
			track->maxy -= 1440;
		}
		track->cb_dirty = true;
		track->zb_dirty = true;
		break;
	case 0x4E00:
		/* RB3D_CCTL */
		if ((idx_value & (1 << 10)) && /* CMASK_ENABLE */
		    p->rdev->cmask_filp != p->filp) {
			DRM_ERROR("Invalid RB3D_CCTL: Cannot enable CMASK.\n");
			return -EINVAL;
		}
		track->num_cb = ((idx_value >> 5) & 0x3) + 1;
		track->cb_dirty = true;
		break;
	case 0x4E38:
	case 0x4E3C:
	case 0x4E40:
	case 0x4E44:
		/* RB3D_COLORPITCH0 */
		/* RB3D_COLORPITCH1 */
		/* RB3D_COLORPITCH2 */
		/* RB3D_COLORPITCH3 */
		if (!(p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS)) {
			r = radeon_cs_packet_next_reloc(p, &reloc, 0);
			if (r) {
				DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					  idx, reg);
				radeon_cs_dump_packet(p, pkt);
				return r;
			}

			if (reloc->tiling_flags & RADEON_TILING_MACRO)
				tile_flags |= R300_COLOR_TILE_ENABLE;
			if (reloc->tiling_flags & RADEON_TILING_MICRO)
				tile_flags |= R300_COLOR_MICROTILE_ENABLE;
			else if (reloc->tiling_flags & RADEON_TILING_MICRO_SQUARE)
				tile_flags |= R300_COLOR_MICROTILE_SQUARE_ENABLE;

			tmp = idx_value & ~(0x7 << 16);
			tmp |= tile_flags;
			ib[idx] = tmp;
		}
		i = (reg - 0x4E38) >> 2;
		track->cb[i].pitch = idx_value & 0x3FFE;
		switch (((idx_value >> 21) & 0xF)) {
		case 9:
		case 11:
		case 12:
			track->cb[i].cpp = 1;
			break;
		case 3:
		case 4:
		case 13:
		case 15:
			track->cb[i].cpp = 2;
			break;
		case 5:
			if (p->rdev->family < CHIP_RV515) {
				DRM_ERROR("Invalid color buffer format (%d)!\n",
					  ((idx_value >> 21) & 0xF));
				return -EINVAL;
			}
			/* Pass through. */
		case 6:
			track->cb[i].cpp = 4;
			break;
		case 10:
			track->cb[i].cpp = 8;
			break;
		case 7:
			track->cb[i].cpp = 16;
			break;
		default:
			DRM_ERROR("Invalid color buffer format (%d) !\n",
				  ((idx_value >> 21) & 0xF));
			return -EINVAL;
		}
		track->cb_dirty = true;
		break;
	case 0x4F00:
		/* ZB_CNTL */
		if (idx_value & 2) {
			track->z_enabled = true;
		} else {
			track->z_enabled = false;
		}
		track->zb_dirty = true;
		break;
	case 0x4F10:
		/* ZB_FORMAT */
		switch ((idx_value & 0xF)) {
		case 0:
		case 1:
			track->zb.cpp = 2;
			break;
		case 2:
			track->zb.cpp = 4;
			break;
		default:
			DRM_ERROR("Invalid z buffer format (%d) !\n",
				  (idx_value & 0xF));
			return -EINVAL;
		}
		track->zb_dirty = true;
		break;
	case 0x4F24:
		/* ZB_DEPTHPITCH */
		if (!(p->cs_flags & RADEON_CS_KEEP_TILING_FLAGS)) {
			r = radeon_cs_packet_next_reloc(p, &reloc, 0);
			if (r) {
				DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
					  idx, reg);
				radeon_cs_dump_packet(p, pkt);
				return r;
			}

			if (reloc->tiling_flags & RADEON_TILING_MACRO)
				tile_flags |= R300_DEPTHMACROTILE_ENABLE;
			if (reloc->tiling_flags & RADEON_TILING_MICRO)
				tile_flags |= R300_DEPTHMICROTILE_TILED;
			else if (reloc->tiling_flags & RADEON_TILING_MICRO_SQUARE)
				tile_flags |= R300_DEPTHMICROTILE_TILED_SQUARE;

			tmp = idx_value & ~(0x7 << 16);
			tmp |= tile_flags;
			ib[idx] = tmp;
		}
		track->zb.pitch = idx_value & 0x3FFC;
		track->zb_dirty = true;
		break;
	case 0x4104:
		/* TX_ENABLE */
		for (i = 0; i < 16; i++) {
			bool enabled;

			enabled = !!(idx_value & (1 << i));
			track->textures[i].enabled = enabled;
		}
		track->tex_dirty = true;
		break;
	case 0x44C0:
	case 0x44C4:
	case 0x44C8:
	case 0x44CC:
	case 0x44D0:
	case 0x44D4:
	case 0x44D8:
	case 0x44DC:
	case 0x44E0:
	case 0x44E4:
	case 0x44E8:
	case 0x44EC:
	case 0x44F0:
	case 0x44F4:
	case 0x44F8:
	case 0x44FC:
		/* TX_FORMAT1_[0-15] */
		i = (reg - 0x44C0) >> 2;
		tmp = (idx_value >> 25) & 0x3;
		track->textures[i].tex_coord_type = tmp;
		switch ((idx_value & 0x1F)) {
		case R300_TX_FORMAT_X8:
		case R300_TX_FORMAT_Y4X4:
		case R300_TX_FORMAT_Z3Y3X2:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_X16:
		case R300_TX_FORMAT_FL_I16:
		case R300_TX_FORMAT_Y8X8:
		case R300_TX_FORMAT_Z5Y6X5:
		case R300_TX_FORMAT_Z6Y5X5:
		case R300_TX_FORMAT_W4Z4Y4X4:
		case R300_TX_FORMAT_W1Z5Y5X5:
		case R300_TX_FORMAT_D3DMFT_CxV8U8:
		case R300_TX_FORMAT_B8G8_B8G8:
		case R300_TX_FORMAT_G8R8_G8B8:
			track->textures[i].cpp = 2;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_Y16X16:
		case R300_TX_FORMAT_FL_I16A16:
		case R300_TX_FORMAT_Z11Y11X10:
		case R300_TX_FORMAT_Z10Y11X11:
		case R300_TX_FORMAT_W8Z8Y8X8:
		case R300_TX_FORMAT_W2Z10Y10X10:
		case 0x17:
		case R300_TX_FORMAT_FL_I32:
		case 0x1e:
			track->textures[i].cpp = 4;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_W16Z16Y16X16:
		case R300_TX_FORMAT_FL_R16G16B16A16:
		case R300_TX_FORMAT_FL_I32A32:
			track->textures[i].cpp = 8;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_FL_R32G32B32A32:
			track->textures[i].cpp = 16;
			track->textures[i].compress_format = R100_TRACK_COMP_NONE;
			break;
		case R300_TX_FORMAT_DXT1:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_DXT1;
			break;
		case R300_TX_FORMAT_ATI2N:
			if (p->rdev->family < CHIP_R420) {
				DRM_ERROR("Invalid texture format %u\n",
					  (idx_value & 0x1F));
				return -EINVAL;
			}
			/* The same rules apply as for DXT3/5. */
			/* Pass through. */
		case R300_TX_FORMAT_DXT3:
		case R300_TX_FORMAT_DXT5:
			track->textures[i].cpp = 1;
			track->textures[i].compress_format = R100_TRACK_COMP_DXT35;
			break;
		default:
			DRM_ERROR("Invalid texture format %u\n",
				  (idx_value & 0x1F));
			return -EINVAL;
		}
		track->tex_dirty = true;
		break;
	case 0x4400:
	case 0x4404:
	case 0x4408:
	case 0x440C:
	case 0x4410:
	case 0x4414:
	case 0x4418:
	case 0x441C:
	case 0x4420:
	case 0x4424:
	case 0x4428:
	case 0x442C:
	case 0x4430:
	case 0x4434:
	case 0x4438:
	case 0x443C:
		/* TX_FILTER0_[0-15] */
		i = (reg - 0x4400) >> 2;
		tmp = idx_value & 0x7;
		if (tmp == 2 || tmp == 4 || tmp == 6) {
			track->textures[i].roundup_w = false;
		}
		tmp = (idx_value >> 3) & 0x7;
		if (tmp == 2 || tmp == 4 || tmp == 6) {
			track->textures[i].roundup_h = false;
		}
		track->tex_dirty = true;
		break;
	case 0x4500:
	case 0x4504:
	case 0x4508:
	case 0x450C:
	case 0x4510:
	case 0x4514:
	case 0x4518:
	case 0x451C:
	case 0x4520:
	case 0x4524:
	case 0x4528:
	case 0x452C:
	case 0x4530:
	case 0x4534:
	case 0x4538:
	case 0x453C:
		/* TX_FORMAT2_[0-15] */
		i = (reg - 0x4500) >> 2;
		tmp = idx_value & 0x3FFF;
		track->textures[i].pitch = tmp + 1;
		if (p->rdev->family >= CHIP_RV515) {
			tmp = ((idx_value >> 15) & 1) << 11;
			track->textures[i].width_11 = tmp;
			tmp = ((idx_value >> 16) & 1) << 11;
			track->textures[i].height_11 = tmp;

			/* ATI1N */
			if (idx_value & (1 << 14)) {
				/* The same rules apply as for DXT1. */
				track->textures[i].compress_format =
					R100_TRACK_COMP_DXT1;
			}
		} else if (idx_value & (1 << 14)) {
			DRM_ERROR("Forbidden bit TXFORMAT_MSB\n");
			return -EINVAL;
		}
		track->tex_dirty = true;
		break;
	case 0x4480:
	case 0x4484:
	case 0x4488:
	case 0x448C:
	case 0x4490:
	case 0x4494:
	case 0x4498:
	case 0x449C:
	case 0x44A0:
	case 0x44A4:
	case 0x44A8:
	case 0x44AC:
	case 0x44B0:
	case 0x44B4:
	case 0x44B8:
	case 0x44BC:
		/* TX_FORMAT0_[0-15] */
		i = (reg - 0x4480) >> 2;
		tmp = idx_value & 0x7FF;
		track->textures[i].width = tmp + 1;
		tmp = (idx_value >> 11) & 0x7FF;
		track->textures[i].height = tmp + 1;
		tmp = (idx_value >> 26) & 0xF;
		track->textures[i].num_levels = tmp;
		tmp = idx_value & (1 << 31);
		track->textures[i].use_pitch = !!tmp;
		tmp = (idx_value >> 22) & 0xF;
|
2009-06-17 15:28:30 +04:00
|
|
|
track->textures[i].txdepth = tmp;
|
2011-02-12 21:21:35 +03:00
|
|
|
track->tex_dirty = true;
|
2009-06-17 15:28:30 +04:00
|
|
|
break;
|
2009-08-15 14:54:13 +04:00
|
|
|
case R300_ZB_ZPASS_ADDR:
|
2013-01-03 03:27:47 +04:00
|
|
|
r = radeon_cs_packet_next_reloc(p, &reloc, 0);
|
2009-08-15 14:54:13 +04:00
|
|
|
if (r) {
|
|
|
|
DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
|
|
|
|
idx, reg);
|
2013-01-03 03:27:45 +04:00
|
|
|
radeon_cs_dump_packet(p, pkt);
|
2009-08-15 14:54:13 +04:00
|
|
|
return r;
|
|
|
|
}
|
2014-03-03 15:38:08 +04:00
|
|
|
ib[idx] = idx_value + ((u32)reloc->gpu_offset);
|
2009-08-15 14:54:13 +04:00
|
|
|
break;
|
2009-12-17 08:02:28 +03:00
|
|
|
case 0x4e0c:
|
|
|
|
/* RB3D_COLOR_CHANNEL_MASK */
|
|
|
|
track->color_channel_mask = idx_value;
|
2011-02-12 21:21:35 +03:00
|
|
|
track->cb_dirty = true;
|
2009-12-17 08:02:28 +03:00
|
|
|
break;
|
2010-07-13 05:11:11 +04:00
|
|
|
case 0x43a4:
|
|
|
|
/* SC_HYPERZ_EN */
|
|
|
|
/* r300c emits this register - we need to disable hyperz for it
|
|
|
|
* without complaining */
|
|
|
|
if (p->rdev->hyperz_filp != p->filp) {
|
|
|
|
if (idx_value & 0x1)
|
|
|
|
ib[idx] = idx_value & ~1;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case 0x4f1c:
|
2009-12-17 08:02:28 +03:00
|
|
|
/* ZB_BW_CNTL */
|
2010-04-13 04:33:36 +04:00
|
|
|
track->zb_cb_clear = !!(idx_value & (1 << 5));
|
2011-02-12 21:21:35 +03:00
|
|
|
track->cb_dirty = true;
|
|
|
|
track->zb_dirty = true;
|
2010-07-13 05:11:11 +04:00
|
|
|
if (p->rdev->hyperz_filp != p->filp) {
|
|
|
|
if (idx_value & (R300_HIZ_ENABLE |
|
|
|
|
R300_RD_COMP_ENABLE |
|
|
|
|
R300_WR_COMP_ENABLE |
|
|
|
|
R300_FAST_FILL_ENABLE))
|
|
|
|
goto fail;
|
|
|
|
}
|
2009-12-17 08:02:28 +03:00
|
|
|
break;
|
|
|
|
case 0x4e04:
|
|
|
|
/* RB3D_BLENDCNTL */
|
|
|
|
track->blend_read_enable = !!(idx_value & (1 << 2));
|
2011-02-12 21:21:35 +03:00
|
|
|
track->cb_dirty = true;
|
2009-12-17 08:02:28 +03:00
|
|
|
break;
|
2011-02-14 03:01:10 +03:00
|
|
|
case R300_RB3D_AARESOLVE_OFFSET:
|
2013-01-03 03:27:47 +04:00
|
|
|
r = radeon_cs_packet_next_reloc(p, &reloc, 0);
|
2011-02-14 03:01:10 +03:00
|
|
|
if (r) {
|
|
|
|
DRM_ERROR("No reloc for ib[%d]=0x%04X\n",
|
|
|
|
idx, reg);
|
2013-01-03 03:27:45 +04:00
|
|
|
radeon_cs_dump_packet(p, pkt);
|
2011-02-14 03:01:10 +03:00
|
|
|
return r;
|
|
|
|
}
|
|
|
|
track->aa.robj = reloc->robj;
|
|
|
|
track->aa.offset = idx_value;
|
|
|
|
track->aa_dirty = true;
|
2014-03-03 15:38:08 +04:00
|
|
|
ib[idx] = idx_value + ((u32)reloc->gpu_offset);
|
2011-02-14 03:01:10 +03:00
|
|
|
break;
|
|
|
|
case R300_RB3D_AARESOLVE_PITCH:
|
|
|
|
track->aa.pitch = idx_value & 0x3FFE;
|
|
|
|
track->aa_dirty = true;
|
|
|
|
break;
|
|
|
|
case R300_RB3D_AARESOLVE_CTL:
|
|
|
|
track->aaresolve = idx_value & 0x1;
|
|
|
|
track->aa_dirty = true;
|
|
|
|
break;
|
2010-07-13 05:11:11 +04:00
|
|
|
case 0x4f30: /* ZB_MASK_OFFSET */
|
|
|
|
case 0x4f34: /* ZB_ZMASK_PITCH */
|
|
|
|
case 0x4f44: /* ZB_HIZ_OFFSET */
|
|
|
|
case 0x4f54: /* ZB_HIZ_PITCH */
|
|
|
|
if (idx_value && (p->rdev->hyperz_filp != p->filp))
|
|
|
|
goto fail;
|
|
|
|
break;
|
|
|
|
case 0x4028:
|
|
|
|
if (idx_value && (p->rdev->hyperz_filp != p->filp))
|
|
|
|
goto fail;
|
|
|
|
/* GB_Z_PEQ_CONFIG */
|
|
|
|
if (p->rdev->family >= CHIP_RV350)
|
|
|
|
break;
|
|
|
|
goto fail;
|
|
|
|
break;
|
2009-08-15 14:54:13 +04:00
|
|
|
case 0x4be8:
|
|
|
|
/* valid register only on RV530 */
|
|
|
|
if (p->rdev->family == CHIP_RV530)
|
|
|
|
break;
|
|
|
|
/* fallthrough do not move */
|
2009-06-05 16:42:42 +04:00
|
|
|
default:
|
2010-02-21 23:24:15 +03:00
|
|
|
goto fail;
|
2009-06-05 16:42:42 +04:00
|
|
|
}
|
|
|
|
return 0;
|
2010-02-21 23:24:15 +03:00
|
|
|
fail:
|
2010-07-13 05:11:11 +04:00
|
|
|
printk(KERN_ERR "Forbidden register 0x%04X in cs at %d (val=%08x)\n",
|
|
|
|
reg, idx, idx_value);
|
2010-02-21 23:24:15 +03:00
|
|
|
return -EINVAL;
|
2009-06-05 16:42:42 +04:00
|
|
|
}

static int r300_packet3_check(struct radeon_cs_parser *p,
			      struct radeon_cs_packet *pkt)
{
	struct radeon_bo_list *reloc;
	struct r100_cs_track *track;
	volatile uint32_t *ib;
	unsigned idx;
	int r;

	ib = p->ib.ptr;
	idx = pkt->idx + 1;
	track = (struct r100_cs_track *)p->track;
	switch (pkt->opcode) {
	case PACKET3_3D_LOAD_VBPNTR:
		r = r100_packet3_load_vbpntr(p, pkt, idx);
		if (r)
			return r;
		break;
	case PACKET3_INDX_BUFFER:
		r = radeon_cs_packet_next_reloc(p, &reloc, 0);
		if (r) {
			DRM_ERROR("No reloc for packet3 %d\n", pkt->opcode);
			radeon_cs_dump_packet(p, pkt);
			return r;
		}
		ib[idx+1] = radeon_get_ib_value(p, idx + 1) + ((u32)reloc->gpu_offset);
		r = r100_cs_track_check_pkt3_indx_buffer(p, pkt, reloc->robj);
		if (r) {
			return r;
		}
		break;
	/* Draw packet */
	case PACKET3_3D_DRAW_IMMD:
		/* Number of dwords is vtx_size * (num_vertices - 1).
		 * PRIM_WALK must be equal to 3: vertex data is embedded
		 * in the cmd stream */
		if (((radeon_get_ib_value(p, idx + 1) >> 4) & 0x3) != 3) {
			DRM_ERROR("PRIM_WALK must be 3 for IMMD draw\n");
			return -EINVAL;
		}
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		track->immd_dwords = pkt->count - 1;
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_IMMD_2:
		/* Number of dwords is vtx_size * (num_vertices - 1).
		 * PRIM_WALK must be equal to 3: vertex data is embedded
		 * in the cmd stream */
		if (((radeon_get_ib_value(p, idx) >> 4) & 0x3) != 3) {
			DRM_ERROR("PRIM_WALK must be 3 for IMMD draw\n");
			return -EINVAL;
		}
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		track->immd_dwords = pkt->count;
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_VBUF:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_VBUF_2:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_INDX:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx + 1);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_DRAW_INDX_2:
		track->vap_vf_cntl = radeon_get_ib_value(p, idx);
		r = r100_cs_track_check(p->rdev, track);
		if (r) {
			return r;
		}
		break;
	case PACKET3_3D_CLEAR_HIZ:
	case PACKET3_3D_CLEAR_ZMASK:
		if (p->rdev->hyperz_filp != p->filp)
			return -EINVAL;
		break;
	case PACKET3_3D_CLEAR_CMASK:
		if (p->rdev->cmask_filp != p->filp)
			return -EINVAL;
		break;
	case PACKET3_NOP:
		break;
	default:
		DRM_ERROR("Packet3 opcode %x not supported\n", pkt->opcode);
		return -EINVAL;
	}
	return 0;
}

int r300_cs_parse(struct radeon_cs_parser *p)
{
	struct radeon_cs_packet pkt;
	struct r100_cs_track *track;
	int r;

	track = kzalloc(sizeof(*track), GFP_KERNEL);
	if (track == NULL)
		return -ENOMEM;
	r100_cs_track_clear(p->rdev, track);
	p->track = track;
	do {
		r = radeon_cs_packet_parse(p, &pkt, p->idx);
		if (r) {
			return r;
		}
		p->idx += pkt.count + 2;
		switch (pkt.type) {
		case RADEON_PACKET_TYPE0:
			r = r100_cs_parse_packet0(p, &pkt,
						  p->rdev->config.r300.reg_safe_bm,
						  p->rdev->config.r300.reg_safe_bm_size,
						  &r300_packet0_check);
			break;
		case RADEON_PACKET_TYPE2:
			break;
		case RADEON_PACKET_TYPE3:
			r = r300_packet3_check(p, &pkt);
			break;
		default:
			DRM_ERROR("Unknown packet type %d !\n", pkt.type);
			return -EINVAL;
		}
		if (r) {
			return r;
		}
	} while (p->idx < p->chunk_ib->length_dw);
	return 0;
}

void r300_set_reg_safe(struct radeon_device *rdev)
{
	rdev->config.r300.reg_safe_bm = r300_reg_safe_bm;
	rdev->config.r300.reg_safe_bm_size = ARRAY_SIZE(r300_reg_safe_bm);
}

void r300_mc_program(struct radeon_device *rdev)
{
	struct r100_mc_save save;
	int r;

	r = r100_debugfs_mc_info_init(rdev);
	if (r) {
		dev_err(rdev->dev, "Failed to create r100_mc debugfs file.\n");
	}

	/* Stops all mc clients */
	r100_mc_stop(rdev, &save);
	if (rdev->flags & RADEON_IS_AGP) {
		WREG32(R_00014C_MC_AGP_LOCATION,
			S_00014C_MC_AGP_START(rdev->mc.gtt_start >> 16) |
			S_00014C_MC_AGP_TOP(rdev->mc.gtt_end >> 16));
		WREG32(R_000170_AGP_BASE, lower_32_bits(rdev->mc.agp_base));
		WREG32(R_00015C_AGP_BASE_2,
			upper_32_bits(rdev->mc.agp_base) & 0xff);
	} else {
		WREG32(R_00014C_MC_AGP_LOCATION, 0x0FFFFFFF);
		WREG32(R_000170_AGP_BASE, 0);
		WREG32(R_00015C_AGP_BASE_2, 0);
	}
	/* Wait for mc idle */
	if (r300_mc_wait_for_idle(rdev))
		DRM_INFO("Failed to wait MC idle before programming MC.\n");
	/* Program MC, should be a 32bits limited address space */
	WREG32(R_000148_MC_FB_LOCATION,
		S_000148_MC_FB_START(rdev->mc.vram_start >> 16) |
		S_000148_MC_FB_TOP(rdev->mc.vram_end >> 16));
	r100_mc_resume(rdev, &save);
}

void r300_clock_startup(struct radeon_device *rdev)
{
	u32 tmp;

	if (radeon_dynclks != -1 && radeon_dynclks)
		radeon_legacy_set_clock_gating(rdev, 1);
	/* We need to force on some of the block */
	tmp = RREG32_PLL(R_00000D_SCLK_CNTL);
	tmp |= S_00000D_FORCE_CP(1) | S_00000D_FORCE_VIP(1);
	if ((rdev->family == CHIP_RV350) || (rdev->family == CHIP_RV380))
		tmp |= S_00000D_FORCE_VAP(1);
	WREG32_PLL(R_00000D_SCLK_CNTL, tmp);
}

static int r300_startup(struct radeon_device *rdev)
{
	int r;

	/* set common regs */
	r100_set_common_regs(rdev);
	/* program mc */
	r300_mc_program(rdev);
	/* Resume clock */
	r300_clock_startup(rdev);
	/* Initialize GPU configuration (# pipes, ...) */
	r300_gpu_init(rdev);
	/* Initialize GART (initialize after TTM so we can allocate
	 * memory through TTM but finalize after TTM) */
	if (rdev->flags & RADEON_IS_PCIE) {
		r = rv370_pcie_gart_enable(rdev);
		if (r)
			return r;
	}

	if (rdev->family == CHIP_R300 ||
	    rdev->family == CHIP_R350 ||
	    rdev->family == CHIP_RV350)
		r100_enable_bm(rdev);

	if (rdev->flags & RADEON_IS_PCI) {
		r = r100_pci_gart_enable(rdev);
		if (r)
			return r;
	}

	/* allocate wb buffer */
	r = radeon_wb_init(rdev);
	if (r)
		return r;

	r = radeon_fence_driver_start_ring(rdev, RADEON_RING_TYPE_GFX_INDEX);
	if (r) {
		dev_err(rdev->dev, "failed initializing CP fences (%d).\n", r);
		return r;
	}

	/* Enable IRQ */
	if (!rdev->irq.installed) {
		r = radeon_irq_kms_init(rdev);
		if (r)
			return r;
	}

	r100_irq_set(rdev);
	rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL);
	/* 1M ring buffer */
	r = r100_cp_init(rdev, 1024 * 1024);
	if (r) {
		dev_err(rdev->dev, "failed initializing CP (%d).\n", r);
		return r;
	}

	r = radeon_ib_pool_init(rdev);
	if (r) {
		dev_err(rdev->dev, "IB initialization failed (%d).\n", r);
		return r;
	}

	return 0;
}

int r300_resume(struct radeon_device *rdev)
{
	int r;

	/* Make sure GART is not working */
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_disable(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_disable(rdev);
	/* Resume clock before doing reset */
	r300_clock_startup(rdev);
	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
	if (radeon_asic_reset(rdev)) {
		dev_warn(rdev->dev, "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
			RREG32(R_000E40_RBBM_STATUS),
			RREG32(R_0007C0_CP_STAT));
	}
	/* post */
	radeon_combios_asic_init(rdev->ddev);
	/* Resume clock after posting */
	r300_clock_startup(rdev);
	/* Initialize surface registers */
	radeon_surface_init(rdev);

	rdev->accel_working = true;
	r = r300_startup(rdev);
	if (r) {
		rdev->accel_working = false;
	}
	return r;
}

int r300_suspend(struct radeon_device *rdev)
{
	radeon_pm_suspend(rdev);
	r100_cp_disable(rdev);
	radeon_wb_disable(rdev);
	r100_irq_disable(rdev);
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_disable(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_disable(rdev);
	return 0;
}

void r300_fini(struct radeon_device *rdev)
{
	radeon_pm_fini(rdev);
	r100_cp_fini(rdev);
	radeon_wb_fini(rdev);
	radeon_ib_pool_fini(rdev);
	radeon_gem_fini(rdev);
	if (rdev->flags & RADEON_IS_PCIE)
		rv370_pcie_gart_fini(rdev);
	if (rdev->flags & RADEON_IS_PCI)
		r100_pci_gart_fini(rdev);
	radeon_agp_fini(rdev);
	radeon_irq_kms_fini(rdev);
	radeon_fence_driver_fini(rdev);
	radeon_bo_fini(rdev);
	radeon_atombios_fini(rdev);
	kfree(rdev->bios);
	rdev->bios = NULL;
}

int r300_init(struct radeon_device *rdev)
{
	int r;

	/* Disable VGA */
	r100_vga_render_disable(rdev);
	/* Initialize scratch registers */
	radeon_scratch_init(rdev);
	/* Initialize surface registers */
	radeon_surface_init(rdev);
	/* TODO: disable VGA need to use VGA request */
	/* restore some register to sane defaults */
	r100_restore_sanity(rdev);
	/* BIOS */
	if (!radeon_get_bios(rdev)) {
		if (ASIC_IS_AVIVO(rdev))
			return -EINVAL;
	}
	if (rdev->is_atom_bios) {
		dev_err(rdev->dev, "Expecting combios for RS400/RS480 GPU\n");
		return -EINVAL;
	} else {
		r = radeon_combios_init(rdev);
		if (r)
			return r;
	}
	/* Reset gpu before posting otherwise ATOM will enter infinite loop */
	if (radeon_asic_reset(rdev)) {
		dev_warn(rdev->dev,
			 "GPU reset failed ! (0xE40=0x%08X, 0x7C0=0x%08X)\n",
			 RREG32(R_000E40_RBBM_STATUS),
			 RREG32(R_0007C0_CP_STAT));
	}
	/* check if cards are posted or not */
	if (radeon_boot_test_post_card(rdev) == false)
		return -EINVAL;
	/* Set asic errata */
	r300_errata(rdev);
	/* Initialize clocks */
	radeon_get_clock_info(rdev->ddev);
	/* initialize AGP */
	if (rdev->flags & RADEON_IS_AGP) {
		r = radeon_agp_init(rdev);
		if (r) {
			radeon_agp_disable(rdev);
		}
	}
	/* initialize memory controller */
	r300_mc_init(rdev);
	/* Fence driver */
	r = radeon_fence_driver_init(rdev);
	if (r)
		return r;
	/* Memory manager */
	r = radeon_bo_init(rdev);
	if (r)
		return r;
	if (rdev->flags & RADEON_IS_PCIE) {
		r = rv370_pcie_gart_init(rdev);
		if (r)
			return r;
	}
	if (rdev->flags & RADEON_IS_PCI) {
		r = r100_pci_gart_init(rdev);
		if (r)
			return r;
	}
	r300_set_reg_safe(rdev);

	/* Initialize power management */
	radeon_pm_init(rdev);

	rdev->accel_working = true;
	r = r300_startup(rdev);
	if (r) {
		/* Something went wrong with the accel init, so stop accel */
		dev_err(rdev->dev, "Disabling GPU acceleration\n");
		r100_cp_fini(rdev);
		radeon_wb_fini(rdev);
		radeon_ib_pool_fini(rdev);
		radeon_irq_kms_fini(rdev);
		if (rdev->flags & RADEON_IS_PCIE)
			rv370_pcie_gart_fini(rdev);
		if (rdev->flags & RADEON_IS_PCI)
			r100_pci_gart_fini(rdev);
		radeon_agp_fini(rdev);
		rdev->accel_working = false;
	}
	return 0;
}