BPF List
From: sdf@google.com
To: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"David S . Miller" <davem@davemloft.net>,
	netdev <netdev@vger.kernel.org>,
	Eric Dumazet <edumazet@google.com>, bpf <bpf@vger.kernel.org>,
	Brian Vazquez <brianvv@google.com>,
	syzbot <syzkaller@googlegroups.com>
Subject: Re: [PATCH bpf] bpf: add schedule points in batch ops
Date: Thu, 17 Feb 2022 10:36:13 -0800
Message-ID: <Yg6VneDwR9su3D4u@google.com>
In-Reply-To: <20220217181902.808742-1-eric.dumazet@gmail.com>

On 02/17, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet@google.com>

> syzbot reported various soft lockups caused by bpf batch operations.

>   INFO: task kworker/1:1:27 blocked for more than 140 seconds.
>   INFO: task hung in rcu_barrier

> Nothing prevents batch ops from processing huge amounts of data;
> we need to add schedule points in them.
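
For anyone skimming: the loop bound in these batch ops comes straight
from userspace, so on a non-preemptible kernel a single bpf() syscall
can keep a CPU busy long enough to trip the soft-lockup and RCU-stall
detectors. A minimal sketch of the pattern the patch applies
(illustrative only; process_one_elem() is a hypothetical stand-in for
the per-element work, not actual kernel code):

  for (cp = 0; cp < max_count; cp++) {    /* max_count is user-controlled */
          err = process_one_elem(map, key);    /* hypothetical helper */
          if (err)
                  break;
          cond_resched();    /* voluntary schedule point in the hot loop */
  }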

> Note that the maybe_wait_bpf_programs(map) calls from
> generic_map_delete_batch() can be factored out by moving
> the call after the loop.

> This will be done later in the -next tree once we get this fix merged,
> unless there is a strong opinion in favor of doing this optimization
> sooner.
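
For reference, the factorization mentioned above would look roughly
like this (an assumption about the eventual follow-up, not an actual
commit; the per-element body is simplified, the real loop also copies
the key from userspace and takes the RCU read lock):

  while (cp < max_count) {
          err = map->ops->map_delete_elem(map, key);    /* simplified */
          if (err)
                  break;
          cp++;
          cond_resched();
  }
  maybe_wait_bpf_programs(map);    /* once per batch, not per element */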

> Fixes: aa2e93b8e58e ("bpf: Add generic support for update and delete batch ops")
> Fixes: cb4d03ab499d ("bpf: Add generic support for lookup batch op")
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> Cc: Brian Vazquez <brianvv@google.com>
> Cc: Stanislav Fomichev <sdf@google.com>

Looks good, thank you!

Reviewed-by: Stanislav Fomichev <sdf@google.com>

> Reported-by: syzbot <syzkaller@googlegroups.com>
> ---
>   kernel/bpf/syscall.c | 3 +++
>   1 file changed, 3 insertions(+)

> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index fa4505f9b6119bcb219ab9733847a98da65d1b21..ca70fe6fba387937dfb54f10826f19ac55a8a8e7 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -1355,6 +1355,7 @@ int generic_map_delete_batch(struct bpf_map *map,
>   		maybe_wait_bpf_programs(map);
>   		if (err)
>   			break;
> +		cond_resched();
>   	}
>   	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
>   		err = -EFAULT;
> @@ -1412,6 +1413,7 @@ int generic_map_update_batch(struct bpf_map *map,

>   		if (err)
>   			break;
> +		cond_resched();
>   	}

>   	if (copy_to_user(&uattr->batch.count, &cp, sizeof(cp)))
> @@ -1509,6 +1511,7 @@ int generic_map_lookup_batch(struct bpf_map *map,
>   		swap(prev_key, key);
>   		retry = MAP_LOOKUP_RETRIES;
>   		cp++;
> +		cond_resched();
>   	}

>   	if (err == -EFAULT)
> --
> 2.35.1.265.g69c8d7142f-goog
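
As a usage-side illustration of why these loops need schedule points: a
single syscall can carry an arbitrarily large batch. A hedged userspace
sketch using libbpf's batch API (map_fd, keys, and n are assumed to be
set up elsewhere):

  #include <bpf/bpf.h>    /* libbpf batch-op wrappers */

  /* Ask the kernel to delete n entries in one BPF_MAP_DELETE_BATCH
   * call; the kernel-side loop over these keys is exactly where the
   * patch adds cond_resched().
   */
  static int delete_batch_all(int map_fd, void *keys, __u32 n)
  {
          DECLARE_LIBBPF_OPTS(bpf_map_batch_opts, opts);
          __u32 count = n;    /* in: number of keys; out: number processed */

          return bpf_map_delete_batch(map_fd, keys, &count, &opts);
  }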


Thread overview: 4+ messages
2022-02-17 18:19 [PATCH bpf] bpf: add schedule points in batch ops Eric Dumazet
2022-02-17 18:36 ` sdf [this message]
2022-02-17 18:37 ` Brian Vazquez
2022-02-17 19:00 ` patchwork-bot+netdevbpf
