From: John Fastabend <john.fastabend@gmail.com>
To: "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>
Cc: john.r.fastabend@intel.com, netdev@vger.kernel.org,
	alexei.starovoitov@gmail.com, daniel@iogearbox.net
Subject: Re: [RFC PATCH] virtio_net: XDP support for adjust_head
Date: Thu, 5 Jan 2017 14:57:23 -0800	[thread overview]
Message-ID: <586ECF53.1070700@gmail.com> (raw)
In-Reply-To: <20170104001332-mutt-send-email-mst@kernel.org>

On 17-01-03 02:16 PM, Michael S. Tsirkin wrote:
> On Tue, Jan 03, 2017 at 02:01:27PM +0800, Jason Wang wrote:
>>
>>
>> On 2017-01-03 03:44, John Fastabend wrote:
>>> Add support for XDP adjust_head by allocating a 256B headroom region
>>> that XDP programs can grow into. This is only enabled when an XDP
>>> program is loaded.
>>>
>>> In order to ensure that we do not have to unwind queue headroom, push
>>> queue setup below bpf_prog_add(). It reads better to do a prog ref
>>> unwind than another queue setup call.
>>>
>>> : There is a problem with this patch as is. When an xdp prog is loaded,
>>>    the old buffers without the 256B headroom need to be flushed so that
>>>    the bpf prog has the necessary headroom. This patch does this by
>>>    calling virtqueue_detach_unused_buf() followed by a
>>>    virtnet_set_queues() call to reinitialize the buffers. However, I
>>>    don't believe this is safe: per the comment in virtio_ring, that API
>>>    is not valid on an active queue, and the only thing we have done
>>>    here is the napi_disable/napi_enable wrappers, which do nothing at
>>>    the emulation layer.
>>>
>>>    So the RFC is really to find the best solution to this problem.
>>>    A couple of things come to mind: (a) always allocate the necessary
>>>    headroom, but this is a bit of a waste; (b) add a bit somewhere to
>>>    flag whether the buffer has headroom, but this would mean XDP
>>>    programs would be broken for one cycle through the ring; (c) figure
>>>    out how to deactivate a queue, free the buffers, and finally
>>>    reallocate. I think (c) is the best choice for now, but I'm not
>>>    seeing the API to do this, so virtio/qemu experts, does anyone know
>>>    off-hand how to make this work? I started looking into the PCI
>>>    callbacks reset() and virtio_device_ready(), or possibly hitting the
>>>    right set of bits with vp_set_status(), but my first attempt just
>>>    hung the device.
>>
>> Hi John:
>>
>> AFAIK, disabling a specific queue is supported only by virtio 1.0,
>> through the queue_enable field in the PCI common cfg.
> 
> In fact 1.0 only allows enabling queues selectively.
> We can add disabling through a spec enhancement, but
> for now reset is the only way.
> 
> 
>> But unfortunately, qemu does not
>> emulate this at all, and legacy devices do not support it either. So the
>> safe way is probably to reset the device and redo the initialization here.
> 
> You will also have to re-apply rx filtering if you do this, and
> probably send a notification uplink.
> 
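
For reference, the queue_enable field Jason mentions is the per-queue
enable in the virtio 1.0 PCI common config. If I'm reading
include/uapi/linux/virtio_pci.h right, the layout is roughly the
following (abridged comments are mine):

struct virtio_pci_common_cfg {
        /* About the whole device. */
        __le32 device_feature_select;   /* read-write */
        __le32 device_feature;          /* read-only */
        __le32 guest_feature_select;    /* read-write */
        __le32 guest_feature;           /* read-write */
        __le16 msix_config;             /* read-write */
        __le16 num_queues;              /* read-only */
        __u8 device_status;             /* read-write, write 0 to reset */
        __u8 config_generation;         /* read-only */

        /* About a specific virtqueue. */
        __le16 queue_select;            /* read-write */
        __le16 queue_size;              /* read-write */
        __le16 queue_msix_vector;       /* read-write */
        __le16 queue_enable;            /* read-write, 1.0: 0->1 only */
        __le16 queue_notify_off;        /* read-only */
        __le32 queue_desc_lo;           /* read-write */
        __le32 queue_desc_hi;           /* read-write */
        __le32 queue_avail_lo;          /* read-write */
        __le32 queue_avail_hi;          /* read-write */
        __le32 queue_used_lo;           /* read-write */
        __le32 queue_used_hi;           /* read-write */
};

So, per Michael's point, since queue_enable only has a defined 0->1
transition, the only architected way to take a queue down is a full
device reset: write 0 to device_status and then walk the
ACKNOWLEDGE/DRIVER/FEATURES_OK/DRIVER_OK sequence again.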

The following seems to hang the device on the next virtnet_send_command().
I expected this to meet the reset requirements from the spec, because I
believe it is the same flow we go through coming out of restore(). For a
real patch we don't actually need to kfree all the structs and reallocate
them, but I was expecting the below to work. Any ideas/hints?

static int virtnet_xdp_reset(struct virtnet_info *vi)
{
        int i, ret;

        /* Quiesce the device: stop tx, cancel pending refills, and
         * disable rx napi before touching the rings.
         */
        netif_device_detach(vi->dev);
        cancel_delayed_work_sync(&vi->refill);
        if (netif_running(vi->dev)) {
                for (i = 0; i < vi->max_queue_pairs; i++)
                        napi_disable(&vi->rq[i].napi);
        }

        /* Tear down and rebuild the virtqueues so rx buffers can be
         * reposted with the 256B XDP headroom.
         */
        remove_vq_common(vi, false);
        ret = init_vqs(vi);
        if (ret)
                return ret;
        virtio_device_ready(vi->vdev);

        /* Refill the rx rings and re-enable napi, mirroring restore(). */
        if (netif_running(vi->dev)) {
                for (i = 0; i < vi->curr_queue_pairs; i++)
                        if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
                                schedule_delayed_work(&vi->refill, 0);

                for (i = 0; i < vi->max_queue_pairs; i++)
                        virtnet_napi_enable(&vi->rq[i]);
        }
        netif_device_attach(vi->dev);
        return 0;
}
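
As an aside, the consumer of that 256B headroom on the BPF side is
bpf_xdp_adjust_head(). A minimal sketch of such a program is below; the
tag header, names, and section name are made up for illustration, and
bpf_helpers.h here is the samples/bpf header, which I'm assuming
declares bpf_xdp_adjust_head():

/* Push a 4-byte tag in front of the frame using the reserved headroom. */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct xdp_tag {
        __u32 tag;
};

SEC("xdp")
int xdp_push_tag(struct xdp_md *ctx)
{
        void *data, *data_end;
        struct xdp_tag *t;

        /* A negative delta grows the packet at the front; this fails
         * (returns non-zero) when the rx buffer has no headroom, which
         * is exactly the case for buffers posted before the XDP program
         * was attached.
         */
        if (bpf_xdp_adjust_head(ctx, -(int)sizeof(*t)))
                return XDP_DROP;

        /* Pointers must be reloaded and bounds-checked after the move. */
        data = (void *)(long)ctx->data;
        data_end = (void *)(long)ctx->data_end;
        if (data + sizeof(*t) > data_end)
                return XDP_DROP;

        t = data;
        t->tag = 0x58445021;    /* "XDP!" */

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";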

