RDMA/mlx5: Prevent prefetch from racing with implicit destruction

Prefetch work in mlx5_ib_prefetch_mr_work can be queued and can run
concurrently with destruction of the implicit MR. num_deferred_work was
intended to serialize this, but there is a race:

       CPU0                                          CPU1

    mlx5_ib_free_implicit_mr()
      xa_erase(odp_mkeys)
      synchronize_srcu()
      __xa_erase(implicit_children)
                                      mlx5_ib_prefetch_mr_work()
                                        pagefault_mr()
                                         pagefault_implicit_mr()
                                          implicit_get_child_mr()
                                           xa_cmpxchg()
                                        atomic_dec_and_test(num_deferred_work)
      wait_event(imr->q_deferred_work)
      ib_umem_odp_release(odp_imr)
        kfree(odp_imr)

At this point in mlx5_ib_free_implicit_mr() the implicit_children xarray is
supposed to be empty, and to stay empty, so that
destroy_unused_implicit_child_mr() and related work is not running and can
never run again.

Since it is not empty, the destroy_unused_implicit_child_mr() flow ends up
touching deallocated memory because mlx5_ib_free_implicit_mr() has already
torn down the imr parent.
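
For reference, the accounting that this race defeats works roughly as in the
sketch below: each queued prefetch work item holds a num_deferred_work
reference on the imr, drops it when its page faulting is done, and wakes
q_deferred_work so that teardown can proceed. The struct and function names
in the sketch are illustrative only, not the exact driver code:

    /* Illustrative sketch; the real names and layout in odp.c differ. */
    struct prefetch_work_sketch {
            struct work_struct work;
            struct mlx5_ib_mr *imr; /* pinned via imr->num_deferred_work */
    };

    static void prefetch_work_fn_sketch(struct work_struct *w)
    {
            struct prefetch_work_sketch *pw =
                    container_of(w, struct prefetch_work_sketch, work);

            /*
             * pagefault_mr() runs here; on the racy path it can still add
             * entries to imr->implicit_children after the teardown path
             * has already emptied the xarray.
             */

            /* Drop the reference that mlx5_ib_free_implicit_mr() waits on. */
            if (atomic_dec_and_test(&pw->imr->num_deferred_work))
                    wake_up(&pw->imr->q_deferred_work);
            kfree(pw);
    }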

The solution is to flush out the prefetch wq by driving num_deferred_work
to zero after creation of new prefetch work is blocked.
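
Condensed, the teardown ordering this establishes in
mlx5_ib_free_implicit_mr() is sketched below, using the names from the diff
that follows; unrelated steps and the body of the xa_for_each loop are
simplified:

    /* xa_erase(odp_mkeys) + synchronize_srcu() already ran, so no new
     * prefetch work can be created for this imr.
     */

    /* 1. Flush prefetch work that was already queued. */
    wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));

    /* 2. Only now empty implicit_children; nothing can re-add entries. */
    xa_lock(&imr->implicit_children);
    xa_for_each (&imr->implicit_children, idx, mtt)
            __xa_erase(&imr->implicit_children, idx);
    xa_unlock(&imr->implicit_children);

    /* 3. Wait out any destroy_unused_implicit_child_mr() still in flight. */
    wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));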

Fixes: 5256edcb98 ("RDMA/mlx5: Rework implicit ODP destroy")
Link: https://lore.kernel.org/r/20200719065435.130722-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Jason Gunthorpe  2020-07-19 09:54:35 +03:00
Parent: 87c4c774cb
Commit: a862192e92
1 file changed, 19 insertions, 3 deletions

@@ -601,6 +601,23 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
 	 */
 	synchronize_srcu(&dev->odp_srcu);
 
+	/*
+	 * All work on the prefetch list must be completed, xa_erase() prevented
+	 * new work from being created.
+	 */
+	wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
+
+	/*
+	 * At this point it is forbidden for any other thread to enter
+	 * pagefault_mr() on this imr. It is already forbidden to call
+	 * pagefault_mr() on an implicit child. Due to this additions to
+	 * implicit_children are prevented.
+	 */
+
+	/*
+	 * Block destroy_unused_implicit_child_mr() from incrementing
+	 * num_deferred_work.
+	 */
 	xa_lock(&imr->implicit_children);
 	xa_for_each (&imr->implicit_children, idx, mtt) {
 		__xa_erase(&imr->implicit_children, idx);
@@ -609,9 +626,8 @@ void mlx5_ib_free_implicit_mr(struct mlx5_ib_mr *imr)
 	xa_unlock(&imr->implicit_children);
 
 	/*
-	 * num_deferred_work can only be incremented inside the odp_srcu, or
-	 * under xa_lock while the child is in the xarray. Thus at this point
-	 * it is only decreasing, and all work holding it is now on the wq.
+	 * Wait for any concurrent destroy_unused_implicit_child_mr() to
+	 * complete.
 	 */
 	wait_event(imr->q_deferred_work, !atomic_read(&imr->num_deferred_work));
 