public inbox for netdev@vger.kernel.org
From: Alexei Starovoitov <alexei.starovoitov@gmail.com>
To: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@kernel.org>,
	davem@davemloft.net, jakub.kicinski@netronome.com,
	netdev@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH bpf-next 1/9] bpf: introduce bpf_spin_lock
Date: Wed, 16 Jan 2019 15:23:06 -0800	[thread overview]
Message-ID: <20190116232304.llq3gpmr2yyrufby@ast-mbp> (raw)
In-Reply-To: <cb773e76-e185-d53b-51f4-b78162d99038@iogearbox.net>

On Wed, Jan 16, 2019 at 11:48:15PM +0100, Daniel Borkmann wrote:
> 
> I think, if I'm not mistaken, there should still be a possibility of causing
> a deadlock: namely, if in the middle of the critical section I use an LD_ABS
> or LD_IND instruction with an out-of-bounds index, I cause an implicit
> return 0 while the lock is held. At least I don't see this being caught; a
> test_verifier snippet for such a case would probably also be good.
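
To make the scenario concrete, a test_verifier snippet along these lines
could cover it (a hypothetical sketch, not a test from this series; the
fixup_map_spin_lock name and a lock offset of 0 in the map value are
assumptions, and .errstr is left out since the rejection message doesn't
exist yet):

	{
		"spin_lock: LD_ABS while lock is held",
		.insns = {
		/* LD_ABS requires the skb ctx in R6 */
		BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
		/* look up the map value holding the bpf_spin_lock */
		BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
		BPF_LD_MAP_FD(BPF_REG_1, 0),
		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
		/* R7 survives LD_ABS clobbering R0-R5 */
		BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
		BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
		BPF_EMIT_CALL(BPF_FUNC_spin_lock),
		/* oob offset -> implicit return 0 with the lock held */
		BPF_LD_ABS(BPF_B, 0x7fffffff),
		BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
		BPF_EMIT_CALL(BPF_FUNC_spin_unlock),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
		},
		.fixup_map_spin_lock = { 4 },
		.result = REJECT,
		.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
	},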

Good catch. My earlier implementation reused check_reference_leak(), which
is called for both bpf_exit and bpf_ld_abs, but then I realized we cannot
call bpf_exit from a callee while the lock is held, so I moved that check
before prepare_func_exit() and forgot about ldabs. Argh. Will fix.
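
The fix will presumably mirror the bpf_exit check in check_ld_abs() as
well; a rough sketch of its shape (the active_spin_lock field name is an
assumption about how the verifier state tracks the held lock):

	/* near the top of check_ld_abs() in kernel/bpf/verifier.c */
	if (env->cur_state->active_spin_lock) {
		verbose(env,
			"BPF_LD_[ABS|IND] cannot be used inside bpf_spin_lock-ed region\n");
		return -EINVAL;
	}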

> Wouldn't we also need to mark the queued spinlock functions as notrace so
> that one cannot, e.g., attach a kprobe to them and cause a deadlock?

There is a recursion check already, so I'm not sure that is necessary, but
I will add it since it doesn't hurt and is indeed safer.
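
For context, the recursion check being referred to here is presumably the
existing bpf_prog_active guard that kprobe programs go through in
trace_call_bpf() (kernel/trace/bpf_trace.c); roughly:

	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* another BPF program is already running on this cpu,
		 * so don't call into a second one (same or different)
		 */
		ret = 0;
		goto out;
	}

notrace itself just removes the ftrace entry point from those functions,
so it is the cheap belt-and-suspenders part on top of this guard.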


Thread overview: 20+ messages
2019-01-16  5:08 [PATCH bpf-next 0/9] introduce bpf_spin_lock Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 1/9] bpf: " Alexei Starovoitov
2019-01-16 22:48   ` Daniel Borkmann
2019-01-16 23:16     ` Daniel Borkmann
2019-01-16 23:30       ` Alexei Starovoitov
2019-01-17  0:21         ` Daniel Borkmann
2019-01-17  1:16           ` Alexei Starovoitov
2019-01-17 11:27             ` Daniel Borkmann
2019-01-18  5:51               ` Alexei Starovoitov
2019-01-16 23:23     ` Alexei Starovoitov [this message]
2019-01-17  0:16   ` Martin Lau
2019-01-17  1:02     ` Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 2/9] bpf: add support for bpf_spin_lock to cgroup local storage Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 3/9] tools/bpf: sync include/uapi/linux/bpf.h Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 4/9] selftests/bpf: add bpf_spin_lock tests Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 5/9] selftests/bpf: add bpf_spin_lock C test Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 8/9] libbpf: introduce bpf_map_lookup_elem_flags() Alexei Starovoitov
2019-01-16  6:18   ` Yonghong Song
2019-01-16  6:28     ` Alexei Starovoitov
2019-01-16  5:08 ` [PATCH bpf-next 9/9] selftests/bpf: test for BPF_F_LOCK Alexei Starovoitov

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20190116232304.llq3gpmr2yyrufby@ast-mbp \
    --to=alexei.starovoitov@gmail.com \
    --cc=ast@kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=davem@davemloft.net \
    --cc=jakub.kicinski@netronome.com \
    --cc=kernel-team@fb.com \
    --cc=netdev@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, reply through a mailto: link to this message.

  Be sure your reply has a Subject: header at the top and a blank
  line before the message body.