net/mlx4: MLX4_TX_BOUNCE_BUFFER_SIZE depends on MAX_SKB_FRAGS

The Google production kernel has increased MAX_SKB_FRAGS to 45
for the BIG-TCP rollout.

Unfortunately, the mlx4 TX bounce buffer is not big enough whenever
an skb has up to 45 page fragments.

This can happen often with TCP TX zero copy, as one frag usually
holds 4096 bytes of payload (order-0 page).
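
For context, a rough worst-case sizing of one TSO descriptor (a
back-of-the-envelope sketch, assuming the 16-byte mlx4 control and
data segments and the driver's 64-byte TXBB unit):

  256 (LSO headers) + 16 (CTRL_SIZE) + 16 (DS_SIZE for skb->head)
      + 45 * 16 (one data segment per frag) = 1008 bytes,
  rounded up to TXBB_SIZE: 1024 bytes (16 TXBBs)

whereas the old fixed 512-byte buffer leaves room for only
(512 - 256 - 16 - 16) / 16 = 14 frags in this worst case.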

Tested:
 Kernel built with MAX_SKB_FRAGS=45
 ip link set dev eth0 gso_max_size 185000
 netperf -t TCP_SENDFILE

I made sure that "ethtool -G eth0 tx 64" was working properly,
with ring->full_size being set to 15.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Wei Wang <weiwan@google.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet, 2022-12-07 14:12:36 +00:00; committed by Jakub Kicinski
Parent: 35f31ff0c0
Commit: 26782aad00
1 changed file: 12 additions and 4 deletions

@@ -89,8 +89,18 @@
 #define MLX4_EN_FILTER_HASH_SHIFT 4
 #define MLX4_EN_FILTER_EXPIRY_QUOTA 60
 
-/* Typical TSO descriptor with 16 gather entries is 352 bytes... */
-#define MLX4_TX_BOUNCE_BUFFER_SIZE 512
+#define CTRL_SIZE	sizeof(struct mlx4_wqe_ctrl_seg)
+#define DS_SIZE		sizeof(struct mlx4_wqe_data_seg)
+
+/* Maximal size of the bounce buffer:
+ * 256 bytes for LSO headers.
+ * CTRL_SIZE for control desc.
+ * DS_SIZE if skb->head contains some payload.
+ * MAX_SKB_FRAGS frags.
+ */
+#define MLX4_TX_BOUNCE_BUFFER_SIZE \
+	ALIGN(256 + CTRL_SIZE + DS_SIZE + MAX_SKB_FRAGS * DS_SIZE, TXBB_SIZE)
+
 #define MLX4_MAX_DESC_TXBBS	(MLX4_TX_BOUNCE_BUFFER_SIZE / TXBB_SIZE)
 
 /*
@@ -217,9 +227,7 @@ struct mlx4_en_tx_info {
 
 #define MLX4_EN_BIT_DESC_OWN	0x80000000
-#define CTRL_SIZE	sizeof(struct mlx4_wqe_ctrl_seg)
 #define MLX4_EN_MEMTYPE_PAD	0x100
-#define DS_SIZE		sizeof(struct mlx4_wqe_data_seg)
 
 struct mlx4_en_tx_desc {
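
As an illustration (not part of the patch), here is a minimal userspace
sketch that evaluates the new sizing formula for a couple of
MAX_SKB_FRAGS values. CTRL_SIZE and DS_SIZE are hardcoded assumptions
of 16 bytes, matching the sizes of the mlx4 control/data segment
structs, and ALIGN mirrors the kernel helper for power-of-two
alignment:

/* Sketch only: computes MLX4_TX_BOUNCE_BUFFER_SIZE for given frag
 * counts, using assumed 16-byte segment sizes and 64-byte TXBBs.
 */
#include <stdio.h>

#define TXBB_SIZE 64
#define CTRL_SIZE 16	/* assumed sizeof(struct mlx4_wqe_ctrl_seg) */
#define DS_SIZE   16	/* assumed sizeof(struct mlx4_wqe_data_seg) */
#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))
#define BOUNCE_BUFFER_SIZE(frags) \
	ALIGN(256 + CTRL_SIZE + DS_SIZE + (frags) * DS_SIZE, TXBB_SIZE)

int main(void)
{
	unsigned int frags[] = { 17, 45 };	/* default and BIG-TCP values */
	unsigned int i;

	for (i = 0; i < sizeof(frags) / sizeof(frags[0]); i++)
		printf("MAX_SKB_FRAGS=%2u -> %4u bytes (%2u TXBBs)\n",
		       frags[i], BOUNCE_BUFFER_SIZE(frags[i]),
		       BOUNCE_BUFFER_SIZE(frags[i]) / TXBB_SIZE);
	return 0;
}

This prints 576 bytes (9 TXBBs) for the default 17 frags and 1024
bytes (16 TXBBs) for 45, i.e. twice the old fixed 512-byte buffer.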