BPF List
* [PATCH] bpf, sockmap: Use sk_rmem_schedule in bpf_tcp_ingress
@ 2024-07-04  4:39 zhengguoyong
  2024-07-04 14:38 ` Michal Kubiak
  0 siblings, 1 reply; 2+ messages in thread
From: zhengguoyong @ 2024-07-04  4:39 UTC (permalink / raw)
  To: john.fastabend, jakub; +Cc: netdev, bpf

In sockmap redirect mode, when a msg is sent to the redirect sk,
sk_wmem_schedule is used to check whether enough memory is available:

    tcp_bpf_sendmsg
        tcp_bpf_send_verdict
            bpf_tcp_ingress
                sk_wmem_schedule

But in bpf_tcp_ingress the sk parameter is the receiving socket,
so sk_rmem_schedule is more suitable here.

Signed-off-by: GuoYong Zheng <zhenggy@chinatelecom.cn>
---
 net/ipv4/tcp_bpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 53b0d62..88c58b5 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -49,7 +49,7 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 		sge = sk_msg_elem(msg, i);
 		size = (apply && apply_bytes < sge->length) ?
 			apply_bytes : sge->length;
-		if (!sk_wmem_schedule(sk, size)) {
+		if (!sk_rmem_schedule(sk, size)) {
 			if (!copied)
 				ret = -ENOMEM;
 			break;
-- 
1.8.3.1

