From: Toshiaki Makita <toshiaki.makita1@gmail.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: "Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"David S. Miller" <davem@davemloft.net>,
	"Jakub Kicinski" <jakub.kicinski@netronome.com>,
	"Jesper Dangaard Brouer" <hawk@kernel.org>,
	"John Fastabend" <john.fastabend@gmail.com>,
	netdev@vger.kernel.org, xdp-newbies@vger.kernel.org,
	bpf@vger.kernel.org, "Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Jason Wang" <jasowang@redhat.com>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Subject: Re: [PATCH v2 bpf-next 1/2] xdp: Add tracepoint for bulk XDP_TX
Date: Mon, 10 Jun 2019 20:41:16 +0900
Message-ID: <cf0bed1c-aa70-7c67-3735-c0137f0fbc5d@gmail.com>
In-Reply-To: <20190607113220.1ea4093a@carbon>

On 2019/06/07 18:32, Jesper Dangaard Brouer wrote:
> On Fri, 7 Jun 2019 11:22:00 +0900
> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
> 
>> On 2019/06/07 4:41, Jesper Dangaard Brouer wrote:
>>> On Thu, 6 Jun 2019 20:04:20 +0900
>>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
>>>    
>>>> On 2019/06/05 16:59, Jesper Dangaard Brouer wrote:
>>>>> On Wed,  5 Jun 2019 14:36:12 +0900
>>>>> Toshiaki Makita <toshiaki.makita1@gmail.com> wrote:
>>>>>       
> [...]
>>>>
>>>> So... prog_id is the problem. The program can be changed while we are
>>>> enqueueing packets to the bulk queue, so the prog_id at flush may be an
>>>> unexpected one.
>>>
>>> Hmmm... that sounds problematic, if the XDP bpf_prog for veth can
>>> change underneath us before the flush.  Our redirect system depends
>>> on things being stable until the xdp_do_flush_map() operation, as it
>>> will e.g. set the per-CPU (bpf_redirect_info) map_to_flush pointer
>>> (which depends on the XDP prog) and expects it to be correct/valid.
>>
>> Sorry, I don't get how maps depend on programs.
> 
> BPF/XDP programs hold a reference count on the maps they use (e.g. for
> redirect), and when the XDP program is removed and the last refcnt on
> the map is dropped, the map is also removed (redirect maps do a
> call_rcu on shutdown).

Thanks, now I understand what you mean.
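
To check my understanding, the ordering you describe would be roughly
the following (my own simplified sketch of the teardown path, not a
quote of the actual code):

	/* Simplified model (my reading; many details omitted):
	 *
	 * bpf_prog_put(xdp_prog)     // last reference to the prog dropped
	 *   -> map references held by the prog are released
	 *      -> map refcnt reaches zero
	 *         -> map free path runs (e.g. dev_map_free() below), which
	 *            must not complete while any CPU still has a pending
	 *            flush that references this map
	 */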

>> At least xdp_do_redirect_map() handles map_to_flush changes during NAPI.
>> Is there a problem when the map is not changed but the program is changed?
>> Also I believe this is not veth-specific behavior. Looking at tun and
>> i40e, they seem to change xdp_prog without stopping the data path.
>   
> I guess this could actually happen, but we are "saved" by the fact
> that the 'map_to_flush' pointer is still valid due to RCU protection.
> 
> But it does look fishy, as our rcu_read_lock sections do not encapsulate
> this. There is an RCU read-side section in veth_xdp_rcv_skb(), which can
> call xdp_do_redirect(), which sets the per-CPU ri->map_to_flush.
> 
> Do we get this protection by running under softirq, and does this
> prevent an RCU grace-period (call_rcu callbacks) from happening?
> (between veth_xdp_rcv_skb() and xdp_do_flush_map() in veth_poll())

Is this the problem that dev_map_free() is trying to avoid here?

	/* To ensure all pending flush operations have completed wait for flush
	 * bitmap to indicate all flush_needed bits to be zero on _all_ cpus.
	 * Because the above synchronize_rcu() ensures the map is disconnected
	 * from the program we can assume no new bits will be set.
	 */
	for_each_online_cpu(cpu) {
		unsigned long *bitmap = per_cpu_ptr(dtab->flush_needed, cpu);

		while (!bitmap_empty(bitmap, dtab->map.max_entries))
			cond_resched();
	}

Not sure if this is working as expected.
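
For reference, my rough sketch of the veth NAPI path being discussed
(function names from veth.c; simplified, error handling omitted):

	/*
	 * veth_poll()                     // NAPI, softirq context
	 *   veth_xdp_rcv()
	 *     veth_xdp_rcv_skb()
	 *       rcu_read_lock();
	 *       bpf_prog_run_xdp(xdp_prog, &xdp);
	 *       xdp_do_redirect();        // sets per-CPU ri->map_to_flush
	 *       rcu_read_unlock();
	 *   if (xdp_xmit & VETH_XDP_REDIR)
	 *     xdp_do_flush_map();         // consumes map_to_flush, but
	 *                                 // outside the rcu_read_lock above
	 */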

> 
> 
> To Toshiaki, regarding your patch 2/2, you are not affected by this
> per-CPU map storing, as you pass along the bulk queue.  I do see your
> point that prog_id could change.  Could you change the tracepoint to
> include 'act' and place 'ifindex' above it in the struct, so that the
> 'act' member is at the same location/offset as in other XDP
> tracepoints?  I see 'ifindex' as the identifier for this tracepoint
> (others have map_id or prog_id in this location).

Sure, thanks.
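
What I understand you are asking for is a field ordering along these
lines (just a sketch of the intended layout; the counter fields are my
guess at what the bulk XDP_TX tracepoint carries, not the final patch):

	TP_STRUCT__entry(
		__field(int, ifindex)	/* identifier, placed first */
		__field(u32, act)	/* same offset as other XDP tracepoints */
		__field(int, drops)
		__field(int, sent)
		__field(int, err)
	),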

Toshiaki Makita
