public inbox for netdev@vger.kernel.org
From: Tiwei Bie <tiwei.bie@intel.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org, wexu@redhat.com
Subject: Re: [RFC v2] virtio: support packed ring
Date: Wed, 18 Apr 2018 09:17:57 +0800	[thread overview]
Message-ID: <20180418011757.ldeju5zh3e4366m5@debian> (raw)
In-Reply-To: <20180417184810-mutt-send-email-mst@kernel.org>

On Tue, Apr 17, 2018 at 06:54:51PM +0300, Michael S. Tsirkin wrote:
> On Tue, Apr 17, 2018 at 10:56:26PM +0800, Tiwei Bie wrote:
> > On Tue, Apr 17, 2018 at 05:04:59PM +0300, Michael S. Tsirkin wrote:
> > > On Tue, Apr 17, 2018 at 08:47:16PM +0800, Tiwei Bie wrote:
> > > > On Tue, Apr 17, 2018 at 03:17:41PM +0300, Michael S. Tsirkin wrote:
> > > > > On Tue, Apr 17, 2018 at 10:51:33AM +0800, Tiwei Bie wrote:
> > > > > > On Tue, Apr 17, 2018 at 10:11:58AM +0800, Jason Wang wrote:
> > > > > > > On 2018年04月13日 15:15, Tiwei Bie wrote:
> > > > > > > > On Fri, Apr 13, 2018 at 12:30:24PM +0800, Jason Wang wrote:
> > > > > > > > > On 2018年04月01日 22:12, Tiwei Bie wrote:
> > > > > > [...]
> > > > > > > > > > +static int detach_buf_packed(struct vring_virtqueue *vq, unsigned int head,
> > > > > > > > > > +			      void **ctx)
> > > > > > > > > > +{
> > > > > > > > > > +	struct vring_packed_desc *desc;
> > > > > > > > > > +	unsigned int i, j;
> > > > > > > > > > +
> > > > > > > > > > +	/* Clear data ptr. */
> > > > > > > > > > +	vq->desc_state[head].data = NULL;
> > > > > > > > > > +
> > > > > > > > > > +	i = head;
> > > > > > > > > > +
> > > > > > > > > > +	for (j = 0; j < vq->desc_state[head].num; j++) {
> > > > > > > > > > +		desc = &vq->vring_packed.desc[i];
> > > > > > > > > > +		vring_unmap_one_packed(vq, desc);
> > > > > > > > > > +		desc->flags = 0x0;
> > > > > > > > > Looks like this is unnecessary.
> > > > > > > > It's safer to zero it. If we don't zero it, after we
> > > > > > > > call virtqueue_detach_unused_buf_packed() which calls
> > > > > > > > this function, the desc is still available to the
> > > > > > > > device.
> > > > > > > 
> > > > > > > Well, detach_unused_buf_packed() should be called after the device is
> > > > > > > stopped; otherwise, even if you try to clear the flags, there will still
> > > > > > > be a window in which the device may use the descriptor.
> > > > > > 
> > > > > > This is not about whether the device has been stopped or
> > > > > > not. There is no other place to re-initialize the ring
> > > > > > descriptors and the wrap_counter, so they need to be set to
> > > > > > the correct values when doing detach_unused_buf.
> > > > > > 
> > > > > > Best regards,
> > > > > > Tiwei Bie
> > > > > 
> > > > > find vqs is the time to do it.
> > > > 
> > > > The .find_vqs() will call .setup_vq() which will eventually
> > > > call vring_create_virtqueue(). It's a different case. Here
> > > > we're talking about re-initializing the descs and updating
> > > > the wrap counter when detaching the unused descs (In this
> > > > case, split ring just needs to decrease vring.avail->idx).
> > > > 
> > > > Best regards,
> > > > Tiwei Bie
> > > 
> > > There's no requirement that  virtqueue_detach_unused_buf re-initializes
> > > the descs. It happens on cleanup path just before drivers delete the
> > > vqs.
> > 
> > Cool, I wasn't aware of that. I saw the split ring decrease
> > vring.avail->idx after detaching an unused desc, so I
> > thought detaching an unused desc also needed to make sure
> > that the ring state was updated correspondingly.
> 
> 
> Hmm. You are right. Seems our console driver is out of spec.
> Will have to look at how to fix that :(
> 
> It was done here:
> 
> Commit b3258ff1d6086bd2b9eeb556844a868ad7d49bc8
> Author: Amit Shah <amit.shah@redhat.com>
> Date:   Wed Mar 16 19:12:10 2011 +0530
> 
>     virtio: Decrement avail idx on buffer detach
>     
>     When detaching a buffer from a vq, the avail.idx value should be
>     decremented as well.
>     
>     This was noticed by hot-unplugging a virtio console port and then
>     plugging in a new one on the same number (re-using the vqs which were
>     just 'disowned').  qemu reported
>     
>        'Guest moved used index from 0 to 256'
>     
>     when any IO was attempted on the new port.
>     
>     CC: stable@kernel.org
>     Reported-by: juzhang <juzhang@redhat.com>
>     Signed-off-by: Amit Shah <amit.shah@redhat.com>
>     Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> 
> The spec is quite explicit though:
> 	A driver MUST NOT decrement the available idx on a live virtqueue (ie. there is no way to “unexpose”
> 	buffers).
> 

Hmm.. Got it. Thanks!

Best regards,
Tiwei Bie


> 
> > If there is no such requirement, do you think it's OK
> > to remove below two lines:
> > 
> > vq->avail_idx_shadow--;
> > vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
> > 
> > from virtqueue_detach_unused_buf(), and we could have
> > one generic function to handle both rings:
> > 
> > void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
> > {
> > 	struct vring_virtqueue *vq = to_vvq(_vq);
> > 	unsigned int num, i;
> > 	void *buf;
> > 
> > 	START_USE(vq);
> > 
> > 	num = vq->packed ? vq->vring_packed.num : vq->vring.num;
> > 
> > 	for (i = 0; i < num; i++) {
> > 		if (!vq->desc_state[i].data)
> > 			continue;
> > 		/* detach_buf clears data, so grab it now. */
> > 		buf = vq->desc_state[i].data;
> > 		detach_buf(vq, i, NULL);
> > 		END_USE(vq);
> > 		return buf;
> > 	}
> > 	/* That should have freed everything. */
> > 	BUG_ON(vq->vq.num_free != num);
> > 
> > 	END_USE(vq);
> > 	return NULL;
> > }
> > 
> > Best regards,
> > Tiwei Bie
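For readers following along, the packed-ring availability rule the whole exchange turns on can be sketched in plain C (the flag bit positions are from the virtio 1.1 packed-ring layout; the helper name is mine, not from the kernel source). A descriptor is available to the device when its AVAIL flag matches the driver's wrap counter and its USED flag does not, which is why a stale flags value left behind after detaching a buffer could still look "available":

```c
#include <stdbool.h>
#include <stdint.h>

/* Flag bit positions from the virtio 1.1 packed-ring descriptor layout. */
#define VRING_PACKED_DESC_F_AVAIL  (1 << 7)
#define VRING_PACKED_DESC_F_USED   (1 << 15)

/*
 * A packed-ring descriptor is available to the device when its AVAIL
 * bit equals the driver's wrap counter and its USED bit does not.
 * Clearing desc->flags (as detach_buf_packed() does above) guarantees
 * the descriptor can no longer satisfy this test on the current wrap.
 */
static bool packed_desc_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_PACKED_DESC_F_AVAIL);
	bool used  = !!(flags & VRING_PACKED_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```

With the initial driver wrap counter of 1, a zeroed flags word fails the AVAIL check, so a detached descriptor is never seen as available by the device on that wrap.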

Thread overview: 28+ messages
2018-04-01 14:12 [RFC v2] virtio: support packed ring Tiwei Bie
2018-04-10  2:55 ` Jason Wang
2018-04-10  3:21   ` Tiwei Bie
2018-04-13  4:30 ` Jason Wang
2018-04-13  7:15   ` Tiwei Bie
2018-04-17  2:11     ` Jason Wang
2018-04-17  2:17       ` Michael S. Tsirkin
2018-04-17  2:24         ` Jason Wang
2018-04-17  2:37           ` Michael S. Tsirkin
2018-04-17  2:51       ` Tiwei Bie
2018-04-17 12:17         ` Michael S. Tsirkin
2018-04-17 12:47           ` Tiwei Bie
2018-04-17 14:04             ` Michael S. Tsirkin
2018-04-17 14:56               ` Tiwei Bie
2018-04-17 15:54                 ` Michael S. Tsirkin
2018-04-18  1:17                   ` Tiwei Bie [this message]
2018-04-13 15:22 ` Michael S. Tsirkin
2018-04-14 11:22   ` Tiwei Bie
2018-04-23  5:42 ` Jason Wang
2018-04-23  9:29   ` Tiwei Bie
2018-04-24  0:54     ` Jason Wang
2018-04-24  1:05       ` Michael S. Tsirkin
2018-04-24  1:14         ` Jason Wang
2018-04-24  1:16         ` Tiwei Bie
2018-04-24  1:29           ` Michael S. Tsirkin
2018-04-24  1:37             ` Tiwei Bie
2018-04-24  1:43               ` Michael S. Tsirkin
2018-04-24  1:49                 ` Tiwei Bie
