tcp: don't abort splice() after small transfers
TCP coalescing added a regression in splice(socket->pipe) performance for some workloads, because of the way tcp_read_sock() is implemented.

The culprit is the break taken when (offset + 1 != skb->len). Since we released the socket lock, this condition is possible if the TCP stack added a fragment to the skb, which can happen with TCP coalescing. So let's go back to the beginning of the loop when this happens, to give splice() a chance to consume more frags per system call.

Doing so fixes the issue and makes GRO 10% faster than LRO on CPU-bound splice() workloads, instead of the opposite.

Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Parent: 077b393d05
Commit: 02275a2ee7
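For context, the workload in question forwards TCP payload through a pipe with splice() so the data never has to be copied into user space. Below is a minimal, illustrative receive loop of that shape; the descriptor names, chunk size and error handling are assumptions made for the example, not taken from the benchmark behind the numbers above.

```c
/* Illustrative splice(socket -> pipe -> fd_out) receive loop; not the
 * benchmark from the commit message.  Build with -D_GNU_SOURCE (or keep
 * the define below) so splice() is declared by <fcntl.h>.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Drain a connected TCP socket into fd_out through an anonymous pipe. */
static int splice_socket_to_fd(int sock_fd, int fd_out)
{
	int pipefd[2];
	ssize_t in = 0, out;

	if (pipe(pipefd) < 0)
		return -1;

	for (;;) {
		/* Socket -> pipe: each call may move far less than requested.
		 * Before this patch, small transfers were even more likely
		 * because tcp_read_sock() aborted after a coalesced skb.
		 */
		in = splice(sock_fd, NULL, pipefd[1], NULL, 65536,
			    SPLICE_F_MOVE | SPLICE_F_MORE);
		if (in <= 0)
			break;			/* EOF or error */

		/* Pipe -> destination: flush everything we just queued. */
		while (in > 0) {
			out = splice(pipefd[0], NULL, fd_out, NULL, in,
				     SPLICE_F_MOVE | SPLICE_F_MORE);
			if (out <= 0) {
				in = -1;	/* propagate the error */
				break;
			}
			in -= out;
		}
		if (in < 0)
			break;
	}

	close(pipefd[0]);
	close(pipefd[1]);
	return in < 0 ? -1 : 0;
}
```

With the pre-patch behavior, each splice() from the socket could return after an unexpectedly small transfer whenever coalescing grew the skb under tcp_read_sock(), multiplying the number of system calls needed to move the same amount of data.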
```diff
@@ -1494,15 +1494,19 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
 				copied += used;
 				offset += used;
 			}
-			/*
-			 * If recv_actor drops the lock (e.g. TCP splice
+			/* If recv_actor drops the lock (e.g. TCP splice
 			 * receive) the skb pointer might be invalid when
 			 * getting here: tcp_collapse might have deleted it
 			 * while aggregating skbs from the socket queue.
 			 */
-			skb = tcp_recv_skb(sk, seq-1, &offset);
-			if (!skb || (offset+1 != skb->len))
+			skb = tcp_recv_skb(sk, seq - 1, &offset);
+			if (!skb)
 				break;
+			/* TCP coalescing might have appended data to the skb.
+			 * Try to splice more frags
+			 */
+			if (offset + 1 != skb->len)
+				continue;
 		}
 		if (tcp_hdr(skb)->fin) {
 			sk_eat_skb(sk, skb, false);
```
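To see where the new continue lands, the following is a condensed, simplified sketch of the enclosing loop in tcp_read_sock() (net/ipv4/tcp.c) after this patch; it is not the verbatim kernel source, and urgent-data handling plus the final receive-buffer/ACK bookkeeping are omitted.

```c
/* Condensed, simplified sketch of tcp_read_sock() after this patch
 * (not the verbatim kernel source).
 */
int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
		  sk_read_actor_t recv_actor)
{
	struct tcp_sock *tp = tcp_sk(sk);
	u32 seq = tp->copied_seq;
	u32 offset;
	int copied = 0;
	struct sk_buff *skb;

	while ((skb = tcp_recv_skb(sk, seq, &offset)) != NULL) {
		if (offset < skb->len) {
			/* recv_actor (e.g. the splice receive callback) may
			 * release the socket lock while consuming the data.
			 */
			int used = recv_actor(desc, skb, offset,
					      skb->len - offset);
			if (used <= 0)
				break;
			seq += used;
			copied += used;
			offset += used;

			/* The lock may have been dropped: re-look-up the skb
			 * holding the last byte we consumed, since
			 * tcp_collapse() could have freed the old one.
			 */
			skb = tcp_recv_skb(sk, seq - 1, &offset);
			if (!skb)
				break;
			/* Coalescing may have appended data to this skb while
			 * the lock was dropped.  Restart the loop (the old
			 * code did "break" here) so the new frags get
			 * spliced in the same system call.
			 */
			if (offset + 1 != skb->len)
				continue;
		}
		if (tcp_hdr(skb)->fin) {
			sk_eat_skb(sk, skb, false);
			++seq;
			break;
		}
		sk_eat_skb(sk, skb, false);
		if (!desc->count)
			break;
	}
	tp->copied_seq = seq;
	return copied;
}
```

Restarting the loop instead of breaking lets a single splice() call drain whatever coalescing appended while the lock was dropped, which is where the throughput gain described in the commit message comes from.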