BPF List
* [PATCH] bpf, sockmap: Use sk_rmem_schedule in bpf_tcp_ingress
@ 2024-07-04  4:39 zhengguoyong
  2024-07-04 14:38 ` Michal Kubiak
  0 siblings, 1 reply; 2+ messages in thread
From: zhengguoyong @ 2024-07-04  4:39 UTC (permalink / raw)
  To: john.fastabend, jakub; +Cc: netdev, bpf

In sockmap redirect mode, when a msg is sent to the redirect sk,
we use sk_wmem_schedule to check whether there is enough memory:

    tcp_bpf_sendmsg
        tcp_bpf_send_verdict
            bpf_tcp_ingress
                sk_wmem_schedule

but in bpf_tcp_ingress, the parameter sk refers to the receiver,
so using sk_rmem_schedule here would be more suitable.

Signed-off-by: GuoYong Zheng <zhenggy@chinatelecom.cn>
---
 net/ipv4/tcp_bpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 53b0d62..88c58b5 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -49,7 +49,7 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
 		sge = sk_msg_elem(msg, i);
 		size = (apply && apply_bytes < sge->length) ?
 			apply_bytes : sge->length;
-		if (!sk_wmem_schedule(sk, size)) {
+		if (!sk_rmem_schedule(sk, size)) {
 			if (!copied)
 				ret = -ENOMEM;
 			break;
-- 
1.8.3.1



* Re: [PATCH] bpf, sockmap: Use sk_rmem_schedule in bpf_tcp_ingress
  2024-07-04  4:39 [PATCH] bpf, sockmap: Use sk_rmem_schedule in bpf_tcp_ingress zhengguoyong
@ 2024-07-04 14:38 ` Michal Kubiak
  0 siblings, 0 replies; 2+ messages in thread
From: Michal Kubiak @ 2024-07-04 14:38 UTC (permalink / raw)
  To: zhengguoyong; +Cc: john.fastabend, jakub, netdev, bpf

On Thu, Jul 04, 2024 at 12:39:01PM +0800, zhengguoyong wrote:
> In sockmap redirect mode, when a msg is sent to the redirect sk,
> we use sk_wmem_schedule to check whether there is enough memory:
> 
>     tcp_bpf_sendmsg
>         tcp_bpf_send_verdict
>             bpf_tcp_ingress
>                 sk_wmem_schedule
> 
> but in bpf_tcp_ingress, the parameter sk refers to the receiver,
> so using sk_rmem_schedule here would be more suitable.
> 
> Signed-off-by: GuoYong Zheng <zhenggy@chinatelecom.cn>
> ---
>  net/ipv4/tcp_bpf.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
> index 53b0d62..88c58b5 100644
> --- a/net/ipv4/tcp_bpf.c
> +++ b/net/ipv4/tcp_bpf.c
> @@ -49,7 +49,7 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
>  		sge = sk_msg_elem(msg, i);
>  		size = (apply && apply_bytes < sge->length) ?
>  			apply_bytes : sge->length;
> -		if (!sk_wmem_schedule(sk, size)) {
> +		if (!sk_rmem_schedule(sk, size)) {
>  			if (!copied)
>  				ret = -ENOMEM;
>  			break;
> -- 
> 1.8.3.1
> 
> 

From the commit message I'm not really sure about the intention of this
patch; however, the existing kernel implementation appears to be correct.
Changing sk_wmem_schedule -> sk_rmem_schedule breaks the kernel build
because the two functions have different parameter lists.

Please see the Patchwork results for details:
https://patchwork.kernel.org/project/netdevbpf/patch/ae2569fa-f34a-40d6-9a03-33a455fbb9ea@chinatelecom.cn/

Thanks,
Michal

Nacked-by: Michal Kubiak <michal.kubiak@intel.com>


