drm fixes for 6.2-rc4

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEEKbZHaGwW9KfbeusDHTzWXnEhr4FAmPA5skACgkQDHTzWXnE
hr69GA/8CgYwN/4vWDiQ+p4rX6muw0gicxKmfJZLxt8BFnQSSjESWZ6/L201JeZT
dl34SdZ++rx8yYLeoJKHKocyvMIj9goHeGdNkeyhaoZvlfPVf0sAMNjYfysIvcGk
s4HoLoXgzH8SCO2dp2MgOstJNncZNSZrH13b3UkgqQuB+VrL+pGx3qJM7z9Khe1j
vtgpBStgFIlkvDYHuTJDsn0X4E543EBs54U4g/Jc3WcnQwtRycUhXmkFDOtwkM/d
bwghisms6P+OvSAMdU2JWDwYLe/87zeXklKZqJzWpbrcB1iZPF/L2B40CuSXidj+
cXjNbWlwm0Yn6ytHMXp7+3bV6VTvDFmI+1uabXVH0wn8UIxX9WoKJeW7JgYnveXU
FG4Un/PhILeRxZa3jRNJVhPPq4JWjoINJvVmwSMMZdKT9x5MvdfHy+gsCpP6Ojjy
++MjslROZE0ciYPmwG2WPsmwylV00aztdIcNHzXZp4tX79hGw6cOfFjH9rfUUqJv
W52WVQnJ+JHv3BgFCyqXReUdSkXT39J3c54L1E9rK+OpVvc1i2gEN+eTVoNp1Vwn
4Gyb8MPKj//NxaUNwpEDfBRp6scd553xz5K+0SPBNB+G/XnRxoPT8jz0/ivpYGDd
WB73KZbvxRz2vzyy+biuEctyVTDlKDNM3UADW83eFspxHNthX28=
=OCT2
-----END PGP SIGNATURE-----

Merge tag 'drm-fixes-2023-01-13' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "There is a bit of a post-holiday build up here I expect, small fixes
  across the board, amdgpu and msm being the main leaders, with others
  having a few. One code removal patch for nouveau:

  buddy:
   - benchmark regression fix for top-down buddy allocation

  panel:
   - add Lenovo panel orientation quirk

  ttm:
   - fix kernel oops regression

  amdgpu:
   - fix missing fence references
   - fix missing pipeline sync fencing
   - SMU13 fan speed fix
   - SMU13 fix power cap handling
   - SMU13 BACO fix
   - Fix a possible segfault in bo validation error case
   - Delay removal of firmware framebuffer
   - Fix error when unloading

  amdkfd:
   - SVM fix when clearing vram
   - GC11 fix for multi-GPU

  i915:
   - Reserve enough fence slot for i915_vma_unbind_async
   - Fix potential use after free
   - Reset engines twice in case of reset failure
   - Use multi-cast registers for SVG Unit registers

  msm:
   - display:
      - doc warning fixes
      - dt attribs cleanups
      - memory leak fix
      - error handling in hdmi probe fix
      - dp_aux_isr incorrect signalling fix
      - shutdown path fix
   - accel:
      - a5xx: fix quirks to be a bitmask
      - a6xx: fix gx halt to avoid 1s hang
      - kexec shutdown fix
      - fix potential double free

  vmwgfx:
   - drop rcu usage to make code more robust

  virtio:
   - fix use-after-free in gem handle code

  nouveau:
   - drop unused nouveau_fbcon.c"

* tag 'drm-fixes-2023-01-13' of git://anongit.freedesktop.org/drm/drm: (35 commits)
  drm: Optimize drm buddy top-down allocation method
  drm/ttm: Fix a regression causing kernel oops'es
  drm/i915/gt: Cover rest of SVG unit MCR registers
  drm/nouveau: Remove file nouveau_fbcon.c
  drm/amdkfd: Fix NULL pointer error for GC 11.0.1 on mGPU
  drm/amd/pm/smu13: BACO is supported when it's in BACO state
  drm/amdkfd: Add sync after creating vram bo
  drm/i915/gt: Reset twice
  drm/amdgpu: fix pipeline sync v2
  drm/vmwgfx: Remove rcu locks from user resources
  drm/virtio: Fix GEM handle creation UAF
  drm/amdgpu: Fixed bug on error when unloading amdgpu
  drm/amd: Delay removal of the firmware framebuffer
  drm/amdgpu: Fix potential NULL dereference
  drm/i915: Fix potential context UAFs
  drm/i915: Reserve enough fence slot for i915_vma_unbind_async
  drm: Add orientation quirk for Lenovo ideapad D330-10IGL
  drm/msm/a6xx: Avoid gx gbit halt during rpm suspend
  drm/msm/adreno: Make adreno quirks not overwrite each other
  drm/msm: another fix for the headless Adreno GPU
  ...
Commit ff5ebafd51
The diff for this merge makes the following changes:

dt-bindings (display/msm):
- dsi controller: the "Display escape clock" description is corrected to
  "Display core clock"
- one required list drops "power-domains" and "operating-points-v2"
- "vdds-supply" and "vcca-supply" are each dropped from a required list
- a new boolean property "qcom,dsi-phy-regulator-ldo-mode" ("Indicates if the
  LDO mode PHY regulator is wanted") is added
- the qcom,qcm2290-mdss and qcom,sm6115-mdss examples rename the node from
  "mdss@5e00000" to "display-subsystem@5e00000"

amdgpu:
- amdgpu_amdkfd_map_gtt_bo_to_gart(): the eviction fence is taken from
  bo->vm_bo->vm->process_info instead of bo->kfd_bo->process_info
- amdgpu_cs_parser_init() now calls amdgpu_sync_create(&p->sync), and
  amdgpu_cs_parser_fini() releases it with amdgpu_sync_free(&parser->sync)
- amdgpu_syncobj_lookup_and_add(): the explicit pipeline-sync handling for
  syncobj dependencies is removed; only amdgpu_sync_fence(&p->sync, fence)
  remains before the fence is put
- amdgpu_cs_sync_rings(): amdgpu_ctx_wait_prev_fence() is called before
  walking the validated BO list, and a new loop drains
  amdgpu_sync_get_fence(&p->sync), adding each scheduler fence that targets
  the gang leader's scheduler ring to the gang leader's explicit_sync so a
  pipeline sync flushes caches before the next job runs
- amdgpu_cs_submit(): a dma_fence_get()/dma_fence_put() pair now brackets
  drm_sched_job_add_dependency() on the scheduled fence, dropping the
  reference on the error path
- amdgpu_device.c gains <drm/drm_aperture.h> and a forward declaration of
  amdgpu_kms_driver; drm_aperture_remove_conflicting_pci_framebuffers()
  ("Get rid of things like offb") moves from amdgpu_pci_probe() into
  amdgpu_device_init(), delaying removal of the firmware framebuffer
- amdgpu_bo_validate_size(): the DRM_DEBUG message in the fail path is only
  printed if "man" is non-NULL, avoiding a NULL dereference
- amdgpu_sync_push_to_job(): dma_fence_put(f) is called when
  drm_sched_job_add_dependency() fails, fixing the missing fence reference
- amdgpu_vram_mgr_fini(): reserved pages are freed from rsv->allocated
  instead of rsv->blocks

amdkfd:
- add_queue_mes(): wptr_addr_off is computed as write_ptr & (PAGE_SIZE - 1)
  instead of subtracting the wptr BO VA (GC 11.0.1 multi-GPU fix)
- svm_range_vram_node_new(): when clearing vram, amdgpu_bo_sync_wait() is
  called after creating the BO, unreserving it and bailing out on failure

amd powerplay (SMU13):
- smu_v13_0_set_fan_speed_rpm(): crystal_clock_freq is set to 2500 instead of
  being read via amdgpu_asic_get_xclk()
- smu_v13_0_baco_is_support(): returns true if the ASIC is already in BACO
  state (smu_v13_0_baco_get_state() == SMU_BACO_STATE_ENTER)
- the smu_v13_0_0 and smu_v13_0_7 feature mask maps gain
  [SMU_FEATURE_PPT_BIT] = {1, FEATURE_THROTTLERS_BIT}

drm core:
- drm_buddy: a new list_insert_sorted() keeps each free list ordered by block
  offset; mark_free() uses it, get_maxblock() now takes (mm, order) and scans
  the free lists from the requested order upwards picking the highest-offset
  block, and alloc_from_freelist() uses list_last_entry() and tracks the
  obtained block order in "tmp" (the top-down allocation benchmark fix)
- drm_panel_orientation_quirks: a new entry matches the Lenovo Ideapad
  D330-10IGL (HD) and maps it to lcd800x1280_rightside_up

i915:
- gem_context_register() stores the context in fpriv->context_xa only after
  it is on the client and i915 context lists (a new comment notes that this
  implicitly consumes the ctx reference), and
  finalize_create_context_locked() takes an extra context reference before
  gem_context_register() so userspace cannot race context destruction
- CHICKEN_RASTER_1 and CHICKEN_RASTER_2 become MCR_REG()s, and the
  corresponding workarounds switch to wa_mcr_masked_en()
- gen6_hw_domain_reset(): the GDRST write and ack wait run in a loop so the
  engines are reset twice, the wait timeout is raised from 500 to 2000, and a
  udelay(50) follows, because platforms such as Jasperlake do not clear the
  engine register state until shortly after GDRST reports completion
- i915_vma_unbind_async() reserves 2 fence slots instead of 1

msm:
- a6xx_bus_clear_pending_transactions() gains a gx_off parameter and only
  halts the gx side of GBIF when it is set; a6xx_gmu_force_off() passes true,
  a6xx_gmu_shutdown() passes a6xx_gpu->hung, and a6xx_recover() sets the new
  a6xx_gpu->hung flag around the recovery sequence
- the adreno quirks become bit masks (ADRENO_QUIRK_* defined via BIT()) and
  struct adreno_info.quirks becomes a u64, so quirks no longer overwrite each
  other
- dpu_encoder_phys_wb kerneldoc: the stale @wb_roi parameter is dropped and
  @init is renamed to @p
- dp_aux_isr() returns immediately when no interrupts are pending
- msm_hdmi_dev_probe() releases the PHY via msm_hdmi_put_phy() when
  devm_pm_runtime_enable() or component_add() fails
- msm_drv_shutdown() only calls drm_atomic_helper_shutdown() when priv->kms
  is set (headless Adreno GPU)
- msm_mdss_parse_data_bus_icc_path() looks up the "mdp1-mem" path only after
  the "mdp0-mem" path has been obtained, avoiding a leak on the early return
nouveau:
- nouveau_fbcon.c is removed entirely (613 lines deleted): the unused legacy
  fbcon acceleration code, including the fillrect/copyarea/imageblit and sync
  hooks, the accel save/disable/restore helpers, and the fbdev
  create/destroy/suspend paths
|
|
||||||
nouveau_fbcon_hotplug_resume(drm->fbcon);
|
|
||||||
pm_runtime_mark_last_busy(drm->dev->dev);
|
|
||||||
pm_runtime_put_autosuspend(drm->dev->dev);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
void
|
|
||||||
nouveau_fbcon_set_suspend(struct drm_device *dev, int state)
|
|
||||||
{
|
|
||||||
struct nouveau_drm *drm = nouveau_drm(dev);
|
|
||||||
|
|
||||||
if (!drm->fbcon)
|
|
||||||
return;
|
|
||||||
|
|
||||||
drm->fbcon_new_state = state;
|
|
||||||
/* Since runtime resume can happen as a result of a sysfs operation,
|
|
||||||
* it's possible we already have the console locked. So handle fbcon
|
|
||||||
* init/deinit from a seperate work thread
|
|
||||||
*/
|
|
||||||
schedule_work(&drm->fbcon_work);
|
|
||||||
}
|
|
||||||
|
|
||||||
void
|
|
||||||
nouveau_fbcon_output_poll_changed(struct drm_device *dev)
|
|
||||||
{
|
|
||||||
struct nouveau_drm *drm = nouveau_drm(dev);
|
|
||||||
struct nouveau_fbdev *fbcon = drm->fbcon;
|
|
||||||
int ret;
|
|
||||||
|
|
||||||
if (!fbcon)
|
|
||||||
return;
|
|
||||||
|
|
||||||
mutex_lock(&fbcon->hotplug_lock);
|
|
||||||
|
|
||||||
ret = pm_runtime_get(dev->dev);
|
|
||||||
if (ret == 1 || ret == -EACCES) {
|
|
||||||
drm_fb_helper_hotplug_event(&fbcon->helper);
|
|
||||||
|
|
||||||
pm_runtime_mark_last_busy(dev->dev);
|
|
||||||
pm_runtime_put_autosuspend(dev->dev);
|
|
||||||
} else if (ret == 0) {
|
|
||||||
/* If the GPU was already in the process of suspending before
|
|
||||||
* this event happened, then we can't block here as we'll
|
|
||||||
* deadlock the runtime pmops since they wait for us to
|
|
||||||
* finish. So, just defer this event for when we runtime
|
|
||||||
* resume again. It will be handled by fbcon_work.
|
|
||||||
*/
|
|
||||||
NV_DEBUG(drm, "fbcon HPD event deferred until runtime resume\n");
|
|
||||||
fbcon->hotplug_waiting = true;
|
|
||||||
pm_runtime_put_noidle(drm->dev->dev);
|
|
||||||
} else {
|
|
||||||
DRM_WARN("fbcon HPD event lost due to RPM failure: %d\n",
|
|
||||||
ret);
|
|
||||||
}
|
|
||||||
|
|
||||||
mutex_unlock(&fbcon->hotplug_lock);
|
|
||||||
}
|
|
||||||
|
|
||||||
void
|
|
||||||
nouveau_fbcon_hotplug_resume(struct nouveau_fbdev *fbcon)
|
|
||||||
{
|
|
||||||
struct nouveau_drm *drm;
|
|
||||||
|
|
||||||
if (!fbcon)
|
|
||||||
return;
|
|
||||||
drm = nouveau_drm(fbcon->helper.dev);
|
|
||||||
|
|
||||||
mutex_lock(&fbcon->hotplug_lock);
|
|
||||||
if (fbcon->hotplug_waiting) {
|
|
||||||
fbcon->hotplug_waiting = false;
|
|
||||||
|
|
||||||
NV_DEBUG(drm, "Handling deferred fbcon HPD events\n");
|
|
||||||
drm_fb_helper_hotplug_event(&fbcon->helper);
|
|
||||||
}
|
|
||||||
mutex_unlock(&fbcon->hotplug_lock);
|
|
||||||
}
|
|
||||||
|
|
||||||
int
|
|
||||||
nouveau_fbcon_init(struct drm_device *dev)
|
|
||||||
{
|
|
||||||
struct nouveau_drm *drm = nouveau_drm(dev);
|
|
||||||
struct nouveau_fbdev *fbcon;
|
|
||||||
int preferred_bpp = nouveau_fbcon_bpp;
|
|
||||||
int ret;
|
|
||||||
|
|
||||||
if (!dev->mode_config.num_crtc ||
|
|
||||||
(to_pci_dev(dev->dev)->class >> 8) != PCI_CLASS_DISPLAY_VGA)
|
|
||||||
return 0;
|
|
||||||
|
|
||||||
fbcon = kzalloc(sizeof(struct nouveau_fbdev), GFP_KERNEL);
|
|
||||||
if (!fbcon)
|
|
||||||
return -ENOMEM;
|
|
||||||
|
|
||||||
drm->fbcon = fbcon;
|
|
||||||
INIT_WORK(&drm->fbcon_work, nouveau_fbcon_set_suspend_work);
|
|
||||||
mutex_init(&fbcon->hotplug_lock);
|
|
||||||
|
|
||||||
drm_fb_helper_prepare(dev, &fbcon->helper, &nouveau_fbcon_helper_funcs);
|
|
||||||
|
|
||||||
ret = drm_fb_helper_init(dev, &fbcon->helper);
|
|
||||||
if (ret)
|
|
||||||
goto free;
|
|
||||||
|
|
||||||
if (preferred_bpp != 8 && preferred_bpp != 16 && preferred_bpp != 32) {
|
|
||||||
if (drm->client.device.info.ram_size <= 32 * 1024 * 1024)
|
|
||||||
preferred_bpp = 8;
|
|
||||||
else
|
|
||||||
if (drm->client.device.info.ram_size <= 64 * 1024 * 1024)
|
|
||||||
preferred_bpp = 16;
|
|
||||||
else
|
|
||||||
preferred_bpp = 32;
|
|
||||||
}
|
|
||||||
|
|
||||||
/* disable all the possible outputs/crtcs before entering KMS mode */
|
|
||||||
if (!drm_drv_uses_atomic_modeset(dev))
|
|
||||||
drm_helper_disable_unused_functions(dev);
|
|
||||||
|
|
||||||
ret = drm_fb_helper_initial_config(&fbcon->helper, preferred_bpp);
|
|
||||||
if (ret)
|
|
||||||
goto fini;
|
|
||||||
|
|
||||||
if (fbcon->helper.info)
|
|
||||||
fbcon->helper.info->pixmap.buf_align = 4;
|
|
||||||
return 0;
|
|
||||||
|
|
||||||
fini:
|
|
||||||
drm_fb_helper_fini(&fbcon->helper);
|
|
||||||
free:
|
|
||||||
kfree(fbcon);
|
|
||||||
drm->fbcon = NULL;
|
|
||||||
return ret;
|
|
||||||
}
|
|
||||||
|
|
||||||
void
|
|
||||||
nouveau_fbcon_fini(struct drm_device *dev)
|
|
||||||
{
|
|
||||||
struct nouveau_drm *drm = nouveau_drm(dev);
|
|
||||||
|
|
||||||
if (!drm->fbcon)
|
|
||||||
return;
|
|
||||||
|
|
||||||
drm_kms_helper_poll_fini(dev);
|
|
||||||
nouveau_fbcon_accel_fini(dev);
|
|
||||||
nouveau_fbcon_destroy(dev, drm->fbcon);
|
|
||||||
kfree(drm->fbcon);
|
|
||||||
drm->fbcon = NULL;
|
|
||||||
}
|
|
|
@@ -173,7 +173,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
 
 	clear = src_iter->ops->maps_tt && (!ttm || !ttm_tt_is_populated(ttm));
 	if (!(clear && ttm && !(ttm->page_flags & TTM_TT_FLAG_ZERO_ALLOC)))
-		ttm_move_memcpy(clear, ttm->num_pages, dst_iter, src_iter);
+		ttm_move_memcpy(clear, PFN_UP(dst_mem->size), dst_iter, src_iter);
 
 	if (!src_iter->ops->maps_tt)
 		ttm_kmap_iter_linear_io_fini(&_src_iter.io, bdev, src_mem);
@@ -358,10 +358,18 @@ static int virtio_gpu_resource_create_ioctl(struct drm_device *dev, void *data,
 		drm_gem_object_release(obj);
 		return ret;
 	}
-	drm_gem_object_put(obj);
 
 	rc->res_handle = qobj->hw_res_handle; /* similiar to a VM address */
 	rc->bo_handle = handle;
+
+	/*
+	 * The handle owns the reference now. But we must drop our
+	 * remaining reference *after* we no longer need to dereference
+	 * the obj. Otherwise userspace could guess the handle and
+	 * race closing it from another thread.
+	 */
+	drm_gem_object_put(obj);
+
 	return 0;
 }
@@ -723,11 +731,18 @@ static int virtio_gpu_resource_create_blob_ioctl(struct drm_device *dev,
 		drm_gem_object_release(obj);
 		return ret;
 	}
-	drm_gem_object_put(obj);
 
 	rc_blob->res_handle = bo->hw_res_handle;
 	rc_blob->bo_handle = handle;
+
+	/*
+	 * The handle owns the reference now. But we must drop our
+	 * remaining reference *after* we no longer need to dereference
+	 * the obj. Otherwise userspace could guess the handle and
+	 * race closing it from another thread.
+	 */
+	drm_gem_object_put(obj);
+
 	return 0;
 }
@@ -254,40 +254,6 @@ void ttm_base_object_unref(struct ttm_base_object **p_base)
 	kref_put(&base->refcount, ttm_release_base);
 }
 
-/**
- * ttm_base_object_noref_lookup - look up a base object without reference
- * @tfile: The struct ttm_object_file the object is registered with.
- * @key: The object handle.
- *
- * This function looks up a ttm base object and returns a pointer to it
- * without refcounting the pointer. The returned pointer is only valid
- * until ttm_base_object_noref_release() is called, and the object
- * pointed to by the returned pointer may be doomed. Any persistent usage
- * of the object requires a refcount to be taken using kref_get_unless_zero().
- * Iff this function returns successfully it needs to be paired with
- * ttm_base_object_noref_release() and no sleeping- or scheduling functions
- * may be called inbetween these function callse.
- *
- * Return: A pointer to the object if successful or NULL otherwise.
- */
-struct ttm_base_object *
-ttm_base_object_noref_lookup(struct ttm_object_file *tfile, uint64_t key)
-{
-	struct vmwgfx_hash_item *hash;
-	int ret;
-
-	rcu_read_lock();
-	ret = ttm_tfile_find_ref_rcu(tfile, key, &hash);
-	if (ret) {
-		rcu_read_unlock();
-		return NULL;
-	}
-
-	__release(RCU);
-	return hlist_entry(hash, struct ttm_ref_object, hash)->obj;
-}
-EXPORT_SYMBOL(ttm_base_object_noref_lookup);
-
 struct ttm_base_object *ttm_base_object_lookup(struct ttm_object_file *tfile,
 					       uint64_t key)
 {
@@ -295,15 +261,16 @@ struct ttm_base_object *ttm_base_object_lookup(struct ttm_object_file *tfile,
 	struct vmwgfx_hash_item *hash;
 	int ret;
 
-	rcu_read_lock();
-	ret = ttm_tfile_find_ref_rcu(tfile, key, &hash);
+	spin_lock(&tfile->lock);
+	ret = ttm_tfile_find_ref(tfile, key, &hash);
 
 	if (likely(ret == 0)) {
 		base = hlist_entry(hash, struct ttm_ref_object, hash)->obj;
 		if (!kref_get_unless_zero(&base->refcount))
 			base = NULL;
 	}
-	rcu_read_unlock();
+	spin_unlock(&tfile->lock);
 
 	return base;
 }
@@ -307,18 +307,4 @@ extern int ttm_prime_handle_to_fd(struct ttm_object_file *tfile,
 #define ttm_prime_object_kfree(__obj, __prime)		\
 	kfree_rcu(__obj, __prime.base.rhead)
 
-struct ttm_base_object *
-ttm_base_object_noref_lookup(struct ttm_object_file *tfile, uint64_t key);
-
-/**
- * ttm_base_object_noref_release - release a base object pointer looked up
- * without reference
- *
- * Releases a base object pointer looked up with ttm_base_object_noref_lookup().
- */
-static inline void ttm_base_object_noref_release(void)
-{
-	__acquire(RCU);
-	rcu_read_unlock();
-}
 #endif
@@ -715,44 +715,6 @@ int vmw_user_bo_lookup(struct drm_file *filp,
 	return 0;
 }
 
-/**
- * vmw_user_bo_noref_lookup - Look up a vmw user buffer object without reference
- * @filp: The TTM object file the handle is registered with.
- * @handle: The user buffer object handle.
- *
- * This function looks up a struct vmw_bo and returns a pointer to the
- * struct vmw_buffer_object it derives from without refcounting the pointer.
- * The returned pointer is only valid until vmw_user_bo_noref_release() is
- * called, and the object pointed to by the returned pointer may be doomed.
- * Any persistent usage of the object requires a refcount to be taken using
- * ttm_bo_reference_unless_doomed(). Iff this function returns successfully it
- * needs to be paired with vmw_user_bo_noref_release() and no sleeping-
- * or scheduling functions may be called in between these function calls.
- *
- * Return: A struct vmw_buffer_object pointer if successful or negative
- * error pointer on failure.
- */
-struct vmw_buffer_object *
-vmw_user_bo_noref_lookup(struct drm_file *filp, u32 handle)
-{
-	struct vmw_buffer_object *vmw_bo;
-	struct ttm_buffer_object *bo;
-	struct drm_gem_object *gobj = drm_gem_object_lookup(filp, handle);
-
-	if (!gobj) {
-		DRM_ERROR("Invalid buffer object handle 0x%08lx.\n",
-			  (unsigned long)handle);
-		return ERR_PTR(-ESRCH);
-	}
-	vmw_bo = gem_to_vmw_bo(gobj);
-	bo = ttm_bo_get_unless_zero(&vmw_bo->base);
-	vmw_bo = vmw_buffer_object(bo);
-	drm_gem_object_put(gobj);
-
-	return vmw_bo;
-}
-
 /**
  * vmw_bo_fence_single - Utility function to fence a single TTM buffer
  * object without unreserving it.
@@ -830,12 +830,7 @@ extern int vmw_user_resource_lookup_handle(
 	uint32_t handle,
 	const struct vmw_user_resource_conv *converter,
 	struct vmw_resource **p_res);
-extern struct vmw_resource *
-vmw_user_resource_noref_lookup_handle(struct vmw_private *dev_priv,
-				       struct ttm_object_file *tfile,
-				       uint32_t handle,
-				       const struct vmw_user_resource_conv *
-				       converter);
 extern int vmw_stream_claim_ioctl(struct drm_device *dev, void *data,
 				  struct drm_file *file_priv);
 extern int vmw_stream_unref_ioctl(struct drm_device *dev, void *data,
@@ -874,15 +869,6 @@ static inline bool vmw_resource_mob_attached(const struct vmw_resource *res)
 	return !RB_EMPTY_NODE(&res->mob_node);
 }
 
-/**
- * vmw_user_resource_noref_release - release a user resource pointer looked up
- * without reference
- */
-static inline void vmw_user_resource_noref_release(void)
-{
-	ttm_base_object_noref_release();
-}
-
 /**
  * Buffer object helper functions - vmwgfx_bo.c
  */
@@ -934,8 +920,6 @@ extern void vmw_bo_unmap(struct vmw_buffer_object *vbo);
 extern void vmw_bo_move_notify(struct ttm_buffer_object *bo,
 			       struct ttm_resource *mem);
 extern void vmw_bo_swap_notify(struct ttm_buffer_object *bo);
-extern struct vmw_buffer_object *
-vmw_user_bo_noref_lookup(struct drm_file *filp, u32 handle);
 
 /**
  * vmw_bo_adjust_prio - Adjust the buffer object eviction priority
@@ -290,20 +290,26 @@ static void vmw_execbuf_rcache_update(struct vmw_res_cache_entry *rcache,
 	rcache->valid_handle = 0;
 }
 
+enum vmw_val_add_flags {
+	vmw_val_add_flag_none  = 0,
+	vmw_val_add_flag_noctx = 1 << 0,
+};
+
 /**
- * vmw_execbuf_res_noref_val_add - Add a resource described by an unreferenced
- * rcu-protected pointer to the validation list.
+ * vmw_execbuf_res_val_add - Add a resource to the validation list.
  *
  * @sw_context: Pointer to the software context.
  * @res: Unreferenced rcu-protected pointer to the resource.
 * @dirty: Whether to change dirty status.
+ * @flags: specifies whether to use the context or not
  *
 * Returns: 0 on success. Negative error code on failure. Typical error codes
 * are %-EINVAL on inconsistency and %-ESRCH if the resource was doomed.
 */
-static int vmw_execbuf_res_noref_val_add(struct vmw_sw_context *sw_context,
-					 struct vmw_resource *res,
-					 u32 dirty)
+static int vmw_execbuf_res_val_add(struct vmw_sw_context *sw_context,
+				   struct vmw_resource *res,
+				   u32 dirty,
+				   u32 flags)
 {
 	struct vmw_private *dev_priv = res->dev_priv;
 	int ret;
@@ -318,24 +324,30 @@ static int vmw_execbuf_res_noref_val_add(struct vmw_sw_context *sw_context,
 		if (dirty)
 			vmw_validation_res_set_dirty(sw_context->ctx,
 						     rcache->private, dirty);
-		vmw_user_resource_noref_release();
 		return 0;
 	}
 
-	priv_size = vmw_execbuf_res_size(dev_priv, res_type);
-	ret = vmw_validation_add_resource(sw_context->ctx, res, priv_size,
-					  dirty, (void **)&ctx_info,
-					  &first_usage);
-	vmw_user_resource_noref_release();
-	if (ret)
-		return ret;
-
-	if (priv_size && first_usage) {
-		ret = vmw_cmd_ctx_first_setup(dev_priv, sw_context, res,
-					      ctx_info);
-		if (ret) {
-			VMW_DEBUG_USER("Failed first usage context setup.\n");
-			return ret;
+	if ((flags & vmw_val_add_flag_noctx) != 0) {
+		ret = vmw_validation_add_resource(sw_context->ctx, res, 0, dirty,
+						  (void **)&ctx_info, NULL);
+		if (ret)
+			return ret;
+
+	} else {
+		priv_size = vmw_execbuf_res_size(dev_priv, res_type);
+		ret = vmw_validation_add_resource(sw_context->ctx, res, priv_size,
+						  dirty, (void **)&ctx_info,
+						  &first_usage);
+		if (ret)
+			return ret;
+
+		if (priv_size && first_usage) {
+			ret = vmw_cmd_ctx_first_setup(dev_priv, sw_context, res,
+						      ctx_info);
+			if (ret) {
+				VMW_DEBUG_USER("Failed first usage context setup.\n");
+				return ret;
+			}
 		}
 	}
 
@@ -343,43 +355,6 @@ static int vmw_execbuf_res_noref_val_add(struct vmw_sw_context *sw_context,
 	return 0;
 }
 
-/**
- * vmw_execbuf_res_noctx_val_add - Add a non-context resource to the resource
- * validation list if it's not already on it
- *
- * @sw_context: Pointer to the software context.
- * @res: Pointer to the resource.
- * @dirty: Whether to change dirty status.
- *
- * Returns: Zero on success. Negative error code on failure.
- */
-static int vmw_execbuf_res_noctx_val_add(struct vmw_sw_context *sw_context,
-					 struct vmw_resource *res,
-					 u32 dirty)
-{
-	struct vmw_res_cache_entry *rcache;
-	enum vmw_res_type res_type = vmw_res_type(res);
-	void *ptr;
-	int ret;
-
-	rcache = &sw_context->res_cache[res_type];
-	if (likely(rcache->valid && rcache->res == res)) {
-		if (dirty)
-			vmw_validation_res_set_dirty(sw_context->ctx,
-						     rcache->private, dirty);
-		return 0;
-	}
-
-	ret = vmw_validation_add_resource(sw_context->ctx, res, 0, dirty,
-					  &ptr, NULL);
-	if (ret)
-		return ret;
-
-	vmw_execbuf_rcache_update(rcache, res, ptr);
-
-	return 0;
-}
-
 /**
  * vmw_view_res_val_add - Add a view and the surface it's pointing to to the
  * validation list
@@ -398,13 +373,13 @@ static int vmw_view_res_val_add(struct vmw_sw_context *sw_context,
 	 * First add the resource the view is pointing to, otherwise it may be
 	 * swapped out when the view is validated.
 	 */
-	ret = vmw_execbuf_res_noctx_val_add(sw_context, vmw_view_srf(view),
-					    vmw_view_dirtying(view));
+	ret = vmw_execbuf_res_val_add(sw_context, vmw_view_srf(view),
+				      vmw_view_dirtying(view), vmw_val_add_flag_noctx);
 	if (ret)
 		return ret;
 
-	return vmw_execbuf_res_noctx_val_add(sw_context, view,
-					     VMW_RES_DIRTY_NONE);
+	return vmw_execbuf_res_val_add(sw_context, view, VMW_RES_DIRTY_NONE,
+				       vmw_val_add_flag_noctx);
 }
 
 /**
@@ -475,8 +450,9 @@ static int vmw_resource_context_res_add(struct vmw_private *dev_priv,
 		if (IS_ERR(res))
 			continue;
 
-		ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-						    VMW_RES_DIRTY_SET);
+		ret = vmw_execbuf_res_val_add(sw_context, res,
+					      VMW_RES_DIRTY_SET,
+					      vmw_val_add_flag_noctx);
 		if (unlikely(ret != 0))
 			return ret;
 	}
@@ -490,9 +466,9 @@ static int vmw_resource_context_res_add(struct vmw_private *dev_priv,
 		if (vmw_res_type(entry->res) == vmw_res_view)
 			ret = vmw_view_res_val_add(sw_context, entry->res);
 		else
-			ret = vmw_execbuf_res_noctx_val_add
-				(sw_context, entry->res,
-				 vmw_binding_dirtying(entry->bt));
+			ret = vmw_execbuf_res_val_add(sw_context, entry->res,
+						      vmw_binding_dirtying(entry->bt),
+						      vmw_val_add_flag_noctx);
 		if (unlikely(ret != 0))
 			break;
 	}
@@ -658,7 +634,8 @@ vmw_cmd_res_check(struct vmw_private *dev_priv,
 {
 	struct vmw_res_cache_entry *rcache = &sw_context->res_cache[res_type];
 	struct vmw_resource *res;
-	int ret;
+	int ret = 0;
+	bool needs_unref = false;
 
 	if (p_res)
 		*p_res = NULL;
@@ -683,17 +660,18 @@ vmw_cmd_res_check(struct vmw_private *dev_priv,
 	if (ret)
 		return ret;
 
-	res = vmw_user_resource_noref_lookup_handle
-		(dev_priv, sw_context->fp->tfile, *id_loc, converter);
-	if (IS_ERR(res)) {
+	ret = vmw_user_resource_lookup_handle
+		(dev_priv, sw_context->fp->tfile, *id_loc, converter, &res);
+	if (ret != 0) {
 		VMW_DEBUG_USER("Could not find/use resource 0x%08x.\n",
 			       (unsigned int) *id_loc);
-		return PTR_ERR(res);
-	}
-
-	ret = vmw_execbuf_res_noref_val_add(sw_context, res, dirty);
-	if (unlikely(ret != 0))
 		return ret;
+	}
+	needs_unref = true;
+
+	ret = vmw_execbuf_res_val_add(sw_context, res, dirty, vmw_val_add_flag_none);
+	if (unlikely(ret != 0))
+		goto res_check_done;
 
 	if (rcache->valid && rcache->res == res) {
 		rcache->valid_handle = true;
@@ -708,7 +686,11 @@ vmw_cmd_res_check(struct vmw_private *dev_priv,
 	if (p_res)
 		*p_res = res;
 
-	return 0;
+res_check_done:
+	if (needs_unref)
+		vmw_resource_unreference(&res);
+
+	return ret;
 }
 
 /**
@@ -1171,9 +1153,9 @@ static int vmw_translate_mob_ptr(struct vmw_private *dev_priv,
 	int ret;
 
 	vmw_validation_preload_bo(sw_context->ctx);
-	vmw_bo = vmw_user_bo_noref_lookup(sw_context->filp, handle);
-	if (IS_ERR(vmw_bo)) {
-		VMW_DEBUG_USER("Could not find or use MOB buffer.\n");
+	ret = vmw_user_bo_lookup(sw_context->filp, handle, &vmw_bo);
+	if (ret != 0) {
+		drm_dbg(&dev_priv->drm, "Could not find or use MOB buffer.\n");
 		return PTR_ERR(vmw_bo);
 	}
 	ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, true, false);
@@ -1225,9 +1207,9 @@ static int vmw_translate_guest_ptr(struct vmw_private *dev_priv,
 	int ret;
 
 	vmw_validation_preload_bo(sw_context->ctx);
-	vmw_bo = vmw_user_bo_noref_lookup(sw_context->filp, handle);
-	if (IS_ERR(vmw_bo)) {
-		VMW_DEBUG_USER("Could not find or use GMR region.\n");
+	ret = vmw_user_bo_lookup(sw_context->filp, handle, &vmw_bo);
+	if (ret != 0) {
+		drm_dbg(&dev_priv->drm, "Could not find or use GMR region.\n");
 		return PTR_ERR(vmw_bo);
 	}
 	ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, false, false);
@@ -2025,8 +2007,9 @@ static int vmw_cmd_set_shader(struct vmw_private *dev_priv,
 		res = vmw_shader_lookup(vmw_context_res_man(ctx),
 					cmd->body.shid, cmd->body.type);
 		if (!IS_ERR(res)) {
-			ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-							    VMW_RES_DIRTY_NONE);
+			ret = vmw_execbuf_res_val_add(sw_context, res,
+						      VMW_RES_DIRTY_NONE,
+						      vmw_val_add_flag_noctx);
 			if (unlikely(ret != 0))
 				return ret;
 
@@ -2273,8 +2256,9 @@ static int vmw_cmd_dx_set_shader(struct vmw_private *dev_priv,
 			return PTR_ERR(res);
 		}
 
-		ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-						    VMW_RES_DIRTY_NONE);
+		ret = vmw_execbuf_res_val_add(sw_context, res,
+					      VMW_RES_DIRTY_NONE,
+					      vmw_val_add_flag_noctx);
 		if (ret)
 			return ret;
 	}
@@ -2777,8 +2761,8 @@ static int vmw_cmd_dx_bind_shader(struct vmw_private *dev_priv,
 		return PTR_ERR(res);
 	}
 
-	ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-					    VMW_RES_DIRTY_NONE);
+	ret = vmw_execbuf_res_val_add(sw_context, res, VMW_RES_DIRTY_NONE,
+				      vmw_val_add_flag_noctx);
 	if (ret) {
 		VMW_DEBUG_USER("Error creating resource validation node.\n");
 		return ret;
@@ -3098,8 +3082,8 @@ static int vmw_cmd_dx_bind_streamoutput(struct vmw_private *dev_priv,
 
 	vmw_dx_streamoutput_set_size(res, cmd->body.sizeInBytes);
 
-	ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-					    VMW_RES_DIRTY_NONE);
+	ret = vmw_execbuf_res_val_add(sw_context, res, VMW_RES_DIRTY_NONE,
+				      vmw_val_add_flag_noctx);
 	if (ret) {
 		DRM_ERROR("Error creating resource validation node.\n");
 		return ret;
@@ -3148,8 +3132,8 @@ static int vmw_cmd_dx_set_streamoutput(struct vmw_private *dev_priv,
 		return 0;
 	}
 
-	ret = vmw_execbuf_res_noctx_val_add(sw_context, res,
-					    VMW_RES_DIRTY_NONE);
+	ret = vmw_execbuf_res_val_add(sw_context, res, VMW_RES_DIRTY_NONE,
+				      vmw_val_add_flag_noctx);
 	if (ret) {
 		DRM_ERROR("Error creating resource validation node.\n");
 		return ret;
@@ -4066,22 +4050,26 @@ static int vmw_execbuf_tie_context(struct vmw_private *dev_priv,
 	if (ret)
 		return ret;
 
-	res = vmw_user_resource_noref_lookup_handle
+	ret = vmw_user_resource_lookup_handle
 		(dev_priv, sw_context->fp->tfile, handle,
-		 user_context_converter);
-	if (IS_ERR(res)) {
+		 user_context_converter, &res);
+	if (ret != 0) {
 		VMW_DEBUG_USER("Could not find or user DX context 0x%08x.\n",
 			       (unsigned int) handle);
-		return PTR_ERR(res);
+		return ret;
 	}
 
-	ret = vmw_execbuf_res_noref_val_add(sw_context, res, VMW_RES_DIRTY_SET);
-	if (unlikely(ret != 0))
+	ret = vmw_execbuf_res_val_add(sw_context, res, VMW_RES_DIRTY_SET,
+				      vmw_val_add_flag_none);
+	if (unlikely(ret != 0)) {
+		vmw_resource_unreference(&res);
 		return ret;
+	}
 
 	sw_context->dx_ctx_node = vmw_execbuf_info_from_res(sw_context, res);
 	sw_context->man = vmw_context_res_man(res);
 
+	vmw_resource_unreference(&res);
 	return 0;
 }
@@ -281,39 +281,6 @@ out_bad_resource:
 	return ret;
 }
 
-/**
- * vmw_user_resource_noref_lookup_handle - lookup a struct resource from a
- * TTM user-space handle and perform basic type checks
- *
- * @dev_priv: Pointer to a device private struct
- * @tfile: Pointer to a struct ttm_object_file identifying the caller
- * @handle: The TTM user-space handle
- * @converter: Pointer to an object describing the resource type
- *
- * If the handle can't be found or is associated with an incorrect resource
- * type, -EINVAL will be returned.
- */
-struct vmw_resource *
-vmw_user_resource_noref_lookup_handle(struct vmw_private *dev_priv,
-				       struct ttm_object_file *tfile,
-				       uint32_t handle,
-				       const struct vmw_user_resource_conv
-				       *converter)
-{
-	struct ttm_base_object *base;
-
-	base = ttm_base_object_noref_lookup(tfile, handle);
-	if (!base)
-		return ERR_PTR(-ESRCH);
-
-	if (unlikely(ttm_base_object_type(base) != converter->object_type)) {
-		ttm_base_object_noref_release();
-		return ERR_PTR(-EINVAL);
-	}
-
-	return converter->base_obj_to_res(base);
-}
-
 /*
  * Helper function that looks either a surface or bo.
  *