drm/radeon/nouveau: fix build regression on alpha due to Xen changes.

The Xen changes were using DMA_ERROR_CODE, which isn't defined on a few
platforms; however, we reverted the Xen patch that caused us to try to
use this code path earlier in the 2.6.39 cycle, so for now let's just
force the code to never take this path and allow it to build again on
alpha.
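
For reference, the portable way to detect a failed mapping without
DMA_ERROR_CODE is dma_mapping_error(). A minimal sketch of that pattern
(the device pointer, page, and error handling here are illustrative
assumptions, not part of this patch):

    dma_addr_t addr;

    addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
    if (dma_mapping_error(dev, addr)) {
        /* works on every platform, no sentinel value needed */
        return -ENOMEM;
    }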

The proper long-term answer is probably to store whether the dma_addr
has been assigned in a flag alongside the dma_addr in the higher-level
code, though I think Thomas wanted to rewrite most of this properly
anyway.
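
A rough sketch of that longer-term shape, purely illustrative (the
struct and helper names below are made up for this example, not taken
from any real patch):

    /* track whether the address was ever assigned, right next to the
     * address itself, instead of comparing against DMA_ERROR_CODE */
    struct dma_slot {
        dma_addr_t addr;
        bool       mapped;  /* true only after a successful mapping */
    };

    static void dma_slot_assign(struct dma_slot *s, dma_addr_t a)
    {
        s->addr = a;
        s->mapped = true;
    }

    /* callers then test s->mapped rather than
     * s->addr != DMA_ERROR_CODE */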

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Dave Airlie 2011-05-09 02:24:04 +00:00
Parent 1f03128251
Commit 03a8066534
2 changed files with 5 additions and 4 deletions

drivers/gpu/drm/nouveau/nouveau_sgdma.c

@@ -42,7 +42,8 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
 
 	nvbe->nr_pages = 0;
 	while (num_pages--) {
-		if (dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE) {
+		/* this code path isn't called and is incorrect anyways */
+		if (0) { /*dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE)*/
 			nvbe->pages[nvbe->nr_pages] =
 				dma_addrs[nvbe->nr_pages];
 			nvbe->ttm_alloced[nvbe->nr_pages] = true;

drivers/gpu/drm/radeon/radeon_gart.c

@@ -181,9 +181,9 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 	t = offset / RADEON_GPU_PAGE_SIZE;
 	p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
 	for (i = 0; i < pages; i++, p++) {
-		/* On TTM path, we only use the DMA API if TTM_PAGE_FLAG_DMA32
-		 * is requested. */
-		if (dma_addr[i] != DMA_ERROR_CODE) {
+		/* we reverted the patch using dma_addr in TTM for now but this
+		 * code stops building on alpha so just comment it out for now */
+		if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
 			rdev->gart.ttm_alloced[p] = true;
 			rdev->gart.pages_addr[p] = dma_addr[i];
 		} else {