WSL2-Linux-Kernel/drivers/infiniband/ulp/rtrs
Li Zhijian 01f6f867ad RDMA/rtrs: Fix rxe_dealloc_pd warning
[ Upstream commit 9c29c8c7df ]

In the current design:
1. The PD and clt_path->s.dev are shared among connections.
2. Every con[n]'s cleanup phase calls destroy_con_cq_qp().
3. destroy_con_cq_qp() always decreases the clt_path->s.dev refcount, and
   when clt_path->s.dev reaches zero, it destroys the PD.
4. When con[1] fails to create, con[1] has not taken a reference on
   clt_path->s.dev, but it still tries to decrease it.

So, in the case where create_cm(con[0]) succeeds but create_cm(con[1]) fails,
destroy_con_cq_qp(con[1]) is called first, and it destroys the PD while the
PD is still taken by con[0].

Here, we refactor the error paths of create_cm() and init_conns() so that
the cleanup is done in the order the objects were created.
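The over-decrement is easiest to see in a minimal refcount sketch. The struct, helper names and the took_ref flag below are illustrative, not the driver's actual code:

```c
/* Minimal sketch of the bug: a device shared by all connections,
 * refcounted once per successfully created connection. */
struct dev { int refcount; int destroyed; };

static void dev_put(struct dev *d)
{
	if (--d->refcount == 0)
		d->destroyed = 1;	/* PD destroyed here */
}

/* Buggy cleanup: runs for every con[n], even for a connection whose
 * create_cm() failed and which therefore never took a reference. */
static void buggy_cleanup(struct dev *d, int took_ref)
{
	(void)took_ref;
	dev_put(d);		/* unconditional put -> over-decrement */
}

/* Fixed idea: only drop the reference the connection actually took,
 * i.e. clean up only what was created. */
static void fixed_cleanup(struct dev *d, int took_ref)
{
	if (took_ref)
		dev_put(d);
}
```

With con[0] holding the only reference, buggy_cleanup() for a failed con[1] drops the refcount to zero and destroys the shared device, which is exactly the situation the warning below reports.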

The warning occurs when destroying an RXE PD whose reference count is not
zero.

 rnbd_client L597: Mapping device /dev/nvme0n1 on session client, (access_mode: rw, nr_poll_queues: 0)
 ------------[ cut here ]------------
 WARNING: CPU: 0 PID: 26407 at drivers/infiniband/sw/rxe/rxe_pool.c:256 __rxe_cleanup+0x13a/0x170 [rdma_rxe]
 Modules linked in: rpcrdma rdma_ucm ib_iser rnbd_client libiscsi rtrs_client scsi_transport_iscsi rtrs_core rdma_cm iw_cm ib_cm crc32_generic rdma_rxe udp_tunnel ib_uverbs ib_core kmem device_dax nd_pmem dax_pmem nd_vme crc32c_intel fuse nvme_core nfit libnvdimm dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua dm_mirror dm_region_hash dm_log dm_mod
 CPU: 0 PID: 26407 Comm: rnbd-client.sh Kdump: loaded Not tainted 6.2.0-rc6-roce-flush+ #53
 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
 RIP: 0010:__rxe_cleanup+0x13a/0x170 [rdma_rxe]
 Code: 45 84 e4 0f 84 5a ff ff ff 48 89 ef e8 5f 18 71 f9 84 c0 75 90 be c8 00 00 00 48 89 ef e8 be 89 1f fa 85 c0 0f 85 7b ff ff ff <0f> 0b 41 bc ea ff ff ff e9 71 ff ff ff e8 84 7f 1f fa e9 d0 fe ff
 RSP: 0018:ffffb09880b6f5f0 EFLAGS: 00010246
 RAX: 0000000000000000 RBX: ffff99401f15d6a8 RCX: 0000000000000000
 RDX: 0000000000000001 RSI: ffffffffbac8234b RDI: 00000000ffffffff
 RBP: ffff99401f15d6d0 R08: 0000000000000001 R09: 0000000000000001
 R10: 0000000000002d82 R11: 0000000000000000 R12: 0000000000000001
 R13: ffff994101eff208 R14: ffffb09880b6f6a0 R15: 00000000fffffe00
 FS:  00007fe113904740(0000) GS:ffff99413bc00000(0000) knlGS:0000000000000000
 CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 CR2: 00007ff6cde656c8 CR3: 000000001f108004 CR4: 00000000001706f0
 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 Call Trace:
  <TASK>
  rxe_dealloc_pd+0x16/0x20 [rdma_rxe]
  ib_dealloc_pd_user+0x4b/0x80 [ib_core]
  rtrs_ib_dev_put+0x79/0xd0 [rtrs_core]
  destroy_con_cq_qp+0x8a/0xa0 [rtrs_client]
  init_path+0x1e7/0x9a0 [rtrs_client]
  ? __pfx_autoremove_wake_function+0x10/0x10
  ? lock_is_held_type+0xd7/0x130
  ? rcu_read_lock_sched_held+0x43/0x80
  ? pcpu_alloc+0x3dd/0x7d0
  ? rtrs_clt_init_stats+0x18/0x40 [rtrs_client]
  rtrs_clt_open+0x24f/0x5a0 [rtrs_client]
  ? __pfx_rnbd_clt_link_ev+0x10/0x10 [rnbd_client]
  rnbd_clt_map_device+0x6a5/0xe10 [rnbd_client]

Fixes: 6a98d71dae ("RDMA/rtrs: client: main functionality")
Link: https://lore.kernel.org/r/1682384563-2-4-git-send-email-lizhijian@fujitsu.com
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Acked-by: Jack Wang <jinpu.wang@ionos.com>
Tested-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
2023-06-21 15:59:16 +02:00

****************************
RDMA Transport (RTRS)
****************************

RTRS (RDMA Transport) is a reliable, high-speed transport library which
provides support for establishing an optimal number of connections between
client and server machines using RDMA (InfiniBand, RoCE, iWARP) transports.
It is optimized to transfer (read/write) IO blocks.

In its core interface it follows the BIO semantics of providing the
possibility to either write data from an sg list to the remote side
or to request ("read") data transfer from the remote side into a given
sg list.

RTRS provides I/O fail-over and load-balancing capabilities by using
multipath I/O (see "add_path" and "mp_policy" configuration entries in
Documentation/ABI/testing/sysfs-class-rtrs-client).

RTRS is used by the RNBD (RDMA Network Block Device) modules.

==================
Transport protocol
==================

Overview
--------
An established connection between a client and a server is called an rtrs
session. A session is associated with a set of memory chunks reserved on the
server side for a given client for rdma transfers. A session
consists of multiple paths, each representing a separate physical link
between client and server. These are used for load balancing and failover.
Each path consists of as many connections (QPs) as there are CPUs on
the client.

When processing an incoming write or read request, the rtrs client uses the
memory chunks reserved for it on the server side. Their number, size and
addresses need to be exchanged between client and server during the connection
establishment phase. Apart from the memory-related information, the client
needs to inform the server about the session name and to identify each path
and connection individually.

On an established session the client sends write or read messages to the
server. The server uses the immediate field to tell the client which request
is being acknowledged and to carry an errno. The client uses the immediate
field to tell the server which of the memory chunks has been accessed and at
which offset the message can be found.
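The chunk/offset encoding on the client side can be sketched as below. The bit split and helper names are assumptions of this sketch, not the driver's actual layout (which lives in rtrs-pri.h):

```c
#include <stdint.h>

/* Illustrative packing of (chunk id, offset) into the 32-bit immediate
 * field: the low bits carry the offset into the chunk, the high bits
 * the chunk index. The 20-bit offset width is an assumption here. */
#define IMM_OFF_BITS	20u
#define IMM_OFF_MASK	((1u << IMM_OFF_BITS) - 1u)

static inline uint32_t imm_pack(uint32_t chunk_id, uint32_t offset)
{
	return (chunk_id << IMM_OFF_BITS) | (offset & IMM_OFF_MASK);
}

static inline void imm_unpack(uint32_t imm, uint32_t *chunk_id,
			      uint32_t *offset)
{
	*chunk_id = imm >> IMM_OFF_BITS;
	*offset   = imm & IMM_OFF_MASK;
}
```

The server side simply reverses the packing to locate the message inside the chunk, e.g. imm_unpack(imm_pack(3, 4096), &c, &o) yields chunk 3, offset 4096.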

The module parameter always_invalidate was introduced for the security problem
discussed at the LPC RDMA MC 2019. When always_invalidate=Y, the server side
invalidates each rdma buffer before handing it over to the RNBD server and
then to the block layer. A new rkey is generated and registered for the
buffer after it returns from the block layer and the RNBD server.
The new rkey is sent back to the client along with the IO result.
This procedure is the default behaviour of the driver. The invalidation and
registration on each IO causes a performance drop of up to 20%. A user of the
driver may choose to load the modules with this mechanism switched off
(always_invalidate=N) if they understand and can accept the risk of a
malicious client being able to corrupt the memory of a server it is connected
to. This might be a reasonable option in a scenario where all the clients and
all the servers are located within a secure datacenter.


Connection establishment
------------------------

1. The client starts establishing the connections belonging to a path of a
session one by one by attaching RTRS_MSG_CON_REQ messages to the rdma_connect
requests. These include the uuid of the session and the uuid of the path to be
established. They are used by the server to find a persisting session/path or
to create a new one when necessary. The message also contains the protocol
version and magic for compatibility, the total number of connections per
session (as many as CPUs on the client), the id of the current connection and
the reconnect counter, which is used to resolve situations where the client is
trying to reconnect a path while the server is still destroying the old one.

2. The server accepts the connection requests one by one and attaches
RTRS_MSG_CONN_RSP messages to the rdma_accept. Apart from magic and
protocol version, the messages include an error code, the queue depth
supported by the server (the number of memory chunks which are going to be
allocated for that session) and the maximum size of one IO. The
RTRS_MSG_NEW_RKEY_F flag is set when always_invalidate=Y.
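The fields listed in steps 1 and 2 can be sketched as on-wire structures. The real definitions live in rtrs-pri.h; the names and field sizes below are assumptions of this sketch, kept only to mirror the text:

```c
#include <stdint.h>

/* Illustrative layout of the connection request of step 1. */
struct con_req_sketch {
	uint16_t magic;		/* compatibility magic */
	uint16_t version;	/* protocol version */
	uint16_t cid;		/* id of this connection */
	uint16_t cid_num;	/* connections per session (== client CPUs) */
	uint16_t recon_cnt;	/* reconnect counter */
	uint8_t  sess_uuid[16];	/* uuid of the session */
	uint8_t  paths_uuid[16];/* uuid of the path */
};

/* Illustrative layout of the connection response of step 2. */
struct con_rsp_sketch {
	uint16_t magic;
	uint16_t version;
	uint16_t errno_code;	/* error code */
	uint16_t queue_depth;	/* memory chunks allocated for the session */
	uint32_t max_io_size;	/* maximum size of one IO */
	uint32_t flags;		/* e.g. RTRS_MSG_NEW_RKEY_F */
};
```

Since these describe wire formats, the real driver uses fixed-endian (__le16/__le32) types and packing rules rather than plain host-order integers.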

3. After all connections of a path are established, the client sends the
RTRS_MSG_INFO_REQ message to the server, containing the name of the session.
This message requests the address information from the server.

4. Server replies to the session info request message with RTRS_MSG_INFO_RSP,
which contains the addresses and keys of the RDMA buffers allocated for that
session.

5. A session becomes connected after all paths to be established are connected
(i.e. steps 1-4 are finished for all paths requested for a session).

6. Server and client periodically exchange heartbeat messages (empty rdma
messages with an immediate field), which are used to detect a crash of the
remote side or a network outage in the absence of IO.

7. On any RDMA-related error or in the case of a heartbeat timeout, the
corresponding path is disconnected, all the inflight IOs are failed over to a
healthy path, if any, and the reconnect mechanism is triggered.
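The heartbeat bookkeeping of steps 6 and 7 can be sketched as follows; the threshold value and all names are assumptions of this sketch, not the driver's:

```c
/* Declare the path dead after this many silent heartbeat periods;
 * the value 5 is purely illustrative. */
#define HB_MISSED_MAX	5

struct hb_state { int missed; int path_dead; };

/* Called once per heartbeat period; got_traffic is nonzero when any
 * message (IO, heartbeat or ack) arrived from the peer in that period. */
static void hb_tick(struct hb_state *hb, int got_traffic)
{
	if (got_traffic) {
		hb->missed = 0;	/* any traffic proves the peer is alive */
		return;
	}
	if (++hb->missed >= HB_MISSED_MAX)
		hb->path_dead = 1; /* disconnect path, fail over inflight IO */
}
```

The key design point mirrored here is that heartbeats only matter in the absence of IO: regular traffic resets the counter, so idle links are the only ones that actually depend on heartbeat messages.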

CLT                                     SRV
*for each connection belonging to a path and for each path:
RTRS_MSG_CON_REQ  ------------------->
                   <------------------- RTRS_MSG_CON_RSP
...
*after all connections are established:
RTRS_MSG_INFO_REQ ------------------->
                   <------------------- RTRS_MSG_INFO_RSP
*heartbeat is started from both sides:
                   -------------------> [RTRS_HB_MSG_IMM]
[RTRS_HB_MSG_ACK] <-------------------
[RTRS_HB_MSG_IMM] <-------------------
                   -------------------> [RTRS_HB_MSG_ACK]

IO path
-------

* Write (always_invalidate=N) *

1. When processing a write request, the client selects one of the memory
chunks on the server side and rdma writes the user data, the user header and
the RTRS_MSG_RDMA_WRITE message there. Apart from the type (write), the
message only contains the size of the user header. The client tells the
server which chunk has been accessed and at what offset the
RTRS_MSG_RDMA_WRITE can be found by using the IMM field.

2. When confirming a write request, the server sends an "empty" rdma message
with an immediate field. The 32-bit field is used to specify the outstanding
inflight IO and the error code.

CLT                                                          SRV
usr_data + usr_hdr + rtrs_msg_rdma_write -----------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM]                        <----------------- (id + errno)
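The response immediate carrying "id + errno" can be sketched as below; the 16/16 bit split and helper names are assumptions of this sketch, not the driver's encoding:

```c
#include <stdint.h>

/* Illustrative split of the 32-bit response immediate: high half is
 * the id of the inflight IO being acknowledged, low half the errno. */
static inline uint32_t rsp_imm_pack(uint16_t io_id, uint16_t err)
{
	return ((uint32_t)io_id << 16) | err;
}

static inline void rsp_imm_unpack(uint32_t imm, uint16_t *io_id,
				  uint16_t *err)
{
	*io_id = imm >> 16;
	*err   = imm & 0xffff;
}
```

Because both the id and the error code travel in the immediate data, the confirmation needs no payload at all, which is why the message can stay "empty".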

* Write (always_invalidate=Y) *

1. When processing a write request, the client selects one of the memory
chunks on the server side and rdma writes the user data, the user header and
the RTRS_MSG_RDMA_WRITE message there. Apart from the type (write), the
message only contains the size of the user header. The client tells the
server which chunk has been accessed and at what offset the
RTRS_MSG_RDMA_WRITE can be found by using the IMM field. The server first
invalidates the rkey associated with the memory chunk; when that finishes, it
passes the IO to the RNBD server module.

2. When confirming a write request, the server sends an "empty" rdma message
with an immediate field. The 32-bit field is used to specify the outstanding
inflight IO and the error code. The new rkey is sent back using a
SEND_WITH_IMM WR. When the client receives the new rkey message, it
validates the message, updates the rkey for the rbuffer and finishes the IO,
then posts the recv buffer back for later use.

CLT                                                          SRV
usr_data + usr_hdr + rtrs_msg_rdma_write -----------------> [RTRS_IO_REQ_IMM]
[RTRS_MSG_RKEY_RSP]                     <----------------- (RTRS_MSG_RKEY_RSP)
[RTRS_IO_RSP_IMM]                        <----------------- (id + errno)
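The client-side handling of RTRS_MSG_RKEY_RSP described above can be sketched as follows; the struct, magic value and function names are all assumptions of this sketch:

```c
#include <stdint.h>

/* Illustrative client-side rbuffer whose rkey is refreshed per IO. */
struct rbuf_sketch { uint32_t rkey; };

enum { RKEY_RSP_MAGIC = 0x1BBD };	/* illustrative magic */

/* Validate the rkey message, update the rbuffer's rkey, then the caller
 * finishes the IO and re-posts the recv buffer for later use. */
static int handle_rkey_rsp(struct rbuf_sketch *rb, uint16_t magic,
			   uint32_t new_rkey)
{
	if (magic != RKEY_RSP_MAGIC)
		return -1;		/* validation failed: drop message */
	rb->rkey = new_rkey;		/* update rkey for the rbuffer */
	return 0;
}
```

The point of the refresh is that the old rkey was invalidated before the IO touched the block layer, so the client must adopt the newly registered rkey before issuing the next rdma write to that buffer.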


* Read (always_invalidate=N)*

1. When processing a read request, the client selects one of the memory
chunks on the server side and rdma writes the user header and the
RTRS_MSG_RDMA_READ message there. This message contains the type (read), the
size of the user header, flags (specifying if memory invalidation is
necessary) and the list of addresses along with keys for the data to be read
into.

2. When confirming a read request, the server transfers the requested data
first, attaches an invalidation message if requested and finally an "empty"
rdma message with an immediate field. The 32-bit field is used to specify the
outstanding inflight IO and the error code.

CLT                                           SRV
usr_hdr + rtrs_msg_rdma_read --------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM]            <-------------- usr_data + (id + errno)
or in case client requested invalidation:
[RTRS_IO_RSP_IMM_W_INV]      <-------------- usr_data + (INV) + (id + errno)

* Read (always_invalidate=Y)*

1. When processing a read request, the client selects one of the memory
chunks on the server side and rdma writes the user header and the
RTRS_MSG_RDMA_READ message there. This message contains the type (read), the
size of the user header, flags (specifying if memory invalidation is
necessary) and the list of addresses along with keys for the data to be read
into. The server first invalidates the rkey associated with the memory
chunks; when that finishes, it passes the IO to the RNBD server module.

2. When confirming a read request, the server transfers the requested data
first, attaches an invalidation message if requested and finally an "empty"
rdma message with an immediate field. The 32-bit field is used to specify the
outstanding inflight IO and the error code. The new rkey is sent back using a
SEND_WITH_IMM WR. When the client receives the new rkey message, it validates
the message, updates the rkey for the rbuffer and finishes the IO, then posts
the recv buffer back for later use.

CLT                                           SRV
usr_hdr + rtrs_msg_rdma_read --------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM]            <-------------- usr_data + (id + errno)
[RTRS_MSG_RKEY_RSP]          <-------------- (RTRS_MSG_RKEY_RSP)
or in case client requested invalidation:
[RTRS_IO_RSP_IMM_W_INV]      <-------------- usr_data + (INV) + (id + errno)

=========================================
Contributors List (in alphabetical order)
=========================================
Danil Kipnis <danil.kipnis@profitbricks.com>
Fabian Holler <mail@fholler.de>
Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Jack Wang <jinpu.wang@profitbricks.com>
Kleber Souza <kleber.souza@profitbricks.com>
Lutz Pogrell <lutz.pogrell@cloud.ionos.com>
Milind Dumbare <Milind.dumbare@gmail.com>
Roman Penyaev <roman.penyaev@profitbricks.com>