From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH V2 net-next 1/7] ptr_ring: introduce batch dequeuing
Date: Fri, 31 Mar 2017 17:31:41 +0300
Message-ID: <20170331151619-mutt-send-email-mst@kernel.org>
In-Reply-To: <941892c4-40f6-9dea-3922-fa02afdc8208@redhat.com>

On Fri, Mar 31, 2017 at 11:52:24AM +0800, Jason Wang wrote:
> 
> 
> On 2017-03-30 21:53, Michael S. Tsirkin wrote:
> > On Thu, Mar 30, 2017 at 03:22:24PM +0800, Jason Wang wrote:
> > > This patch introduces a batched version of consuming: the consumer
> > > can dequeue more than one pointer from the ring at a time. We don't
> > > care about read reordering here, so no compiler barrier is needed.
> > > 
> > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > > ---
> > >   include/linux/ptr_ring.h | 65 ++++++++++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 65 insertions(+)
> > > 
> > > diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
> > > index 6c70444..2be0f350 100644
> > > --- a/include/linux/ptr_ring.h
> > > +++ b/include/linux/ptr_ring.h
> > > @@ -247,6 +247,22 @@ static inline void *__ptr_ring_consume(struct ptr_ring *r)
> > >   	return ptr;
> > >   }
> > > +static inline int __ptr_ring_consume_batched(struct ptr_ring *r,
> > > +					     void **array, int n)
> > Can we use a shorter name? ptr_ring_consume_batch?
> 
> OK, but we at least need to keep the "__" prefix, since there's a locked version.
> 
> > 
> > > +{
> > > +	void *ptr;
> > > +	int i;
> > > +
> > > +	for (i = 0; i < n; i++) {
> > > +		ptr = __ptr_ring_consume(r);
> > > +		if (!ptr)
> > > +			break;
> > > +		array[i] = ptr;
> > > +	}
> > > +
> > > +	return i;
> > > +}
> > > +
> > >   /*
> > >    * Note: resize (below) nests producer lock within consumer lock, so if you
> > >    * call this in interrupt or BH context, you must disable interrupts/BH when
> > I'd like to add a code comment here explaining why we don't
> > care about CPU or compiler reordering. I think the reason lies
> > in the way you use this API: in vhost it does not matter
> > if you get fewer entries than are present in the ring.
> > That's OK, but it needs to be noted in a code comment so
> > people use this function correctly.
> 
> Interesting, but I still think it's not necessary.
> 
> If the consumer is busy polling, it will eventually get the entries. If
> the consumer needs a notification from the producer, it should drain the
> queue, which means it needs to enable notification before the last
> consume attempt; otherwise it's a bug. The batch consuming function in
> this patch guarantees returning at least one pointer if there are any,
> so doesn't that look sufficient for correctness?
> 
> Thanks

Suppose you ask for N entries but get back only N-1. That seems to
imply the ring is now empty. Do we actually guarantee that?
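
To make the concern concrete, here is the kind of caller that would
depend on such a guarantee (hypothetical sketch only; BATCH, process()
and enable_notification() are made-up stand-ins, not part of this
patch):

	void *batch[BATCH];
	int n;

	for (;;) {
		n = ptr_ring_consume_batched(r, batch, BATCH);
		process(batch, n);
		if (n < BATCH) {
			/* Assumes a short return means the ring was
			 * observed empty at some point.
			 */
			enable_notification();
			break;
		}
	}

If a short return can happen with entries still present, this caller
has to enable notification first and then do one final consume, as you
describe above.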


> > 
> > Also, I think you need to repeat the comment about cpu_relax()
> > near this function: if someone uses it in a loop,
> > a compiler barrier is needed to prevent the compiler from
> > optimizing the reads out.
> > 
> > I note that ptr_ring_consume currently lacks any of these
> > comments so I'm ok with merging as is, and I'll add
> > documentation on top.
> > Like this perhaps?
> > 
> > /* Consume up to n entries and return the number of entries consumed,
> >  * or 0 if the ring is empty.
> >  * Note: this might return early with fewer entries than are present
> >  * in the ring.
> >  * Note: callers invoking this in a loop must use a compiler barrier,
> >  * for example cpu_relax(). Callers must take consumer_lock
> >  * if the ring is ever resized - see e.g. ptr_ring_consume_batch.
> >  */
> > 
> > 
> > 
> > > @@ -297,6 +313,55 @@ static inline void *ptr_ring_consume_bh(struct ptr_ring *r)
> > >   	return ptr;
> > >   }
> > > +static inline int ptr_ring_consume_batched(struct ptr_ring *r,
> > > +					   void **array, int n)
> > > +{
> > > +	int ret;
> > > +
> > > +	spin_lock(&r->consumer_lock);
> > > +	ret = __ptr_ring_consume_batched(r, array, n);
> > > +	spin_unlock(&r->consumer_lock);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static inline int ptr_ring_consume_batched_irq(struct ptr_ring *r,
> > > +					       void **array, int n)
> > > +{
> > > +	int ret;
> > > +
> > > +	spin_lock_irq(&r->consumer_lock);
> > > +	ret = __ptr_ring_consume_batched(r, array, n);
> > > +	spin_unlock_irq(&r->consumer_lock);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static inline int ptr_ring_consume_batched_any(struct ptr_ring *r,
> > > +					       void **array, int n)
> > > +{
> > > +	unsigned long flags;
> > > +	int ret;
> > > +
> > > +	spin_lock_irqsave(&r->consumer_lock, flags);
> > > +	ret = __ptr_ring_consume_batched(r, array, n);
> > > +	spin_unlock_irqrestore(&r->consumer_lock, flags);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static inline int ptr_ring_consume_batched_bh(struct ptr_ring *r,
> > > +					      void **array, int n)
> > > +{
> > > +	int ret;
> > > +
> > > +	spin_lock_bh(&r->consumer_lock);
> > > +	ret = __ptr_ring_consume_batched(r, array, n);
> > > +	spin_unlock_bh(&r->consumer_lock);
> > > +
> > > +	return ret;
> > > +}
> > > +
> > >   /* Cast to structure type and call a function without discarding from FIFO.
> > >    * Function must return a value.
> > >    * Callers must take consumer_lock.
> > > -- 
> > > 2.7.4
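
For reference, the busy-polling pattern the proposed comment describes
would look roughly like this (illustrative sketch only; batch[] and
process() are made-up stand-ins):

	void *batch[16];
	int n, i;

	/* The locked variant takes consumer_lock internally, which
	 * also keeps this safe against a concurrent resize.
	 */
	while (!(n = ptr_ring_consume_batched(r, batch, 16)))
		cpu_relax();	/* compiler barrier between retries */

	for (i = 0; i < n; i++)
		process(batch[i]);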

Thread overview: 19+ messages
2017-03-30  7:22 [PATCH V2 net-next 0/7] vhost-net rx batching Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 1/7] ptr_ring: introduce batch dequeuing Jason Wang
2017-03-30 13:53   ` Michael S. Tsirkin
2017-03-31  3:52     ` Jason Wang
2017-03-31 14:31       ` Michael S. Tsirkin [this message]
2017-04-01  5:14         ` Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 2/7] skb_array: " Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 3/7] tun: export skb_array Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 4/7] tap: " Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 5/7] tun: support receiving skb through msg_control Jason Wang
2017-03-30 15:06   ` Michael S. Tsirkin
2017-03-31  4:10     ` Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 6/7] tap: support receiving skb from msg_control Jason Wang
2017-03-30 15:03   ` Michael S. Tsirkin
2017-03-31  4:07     ` Jason Wang
2017-03-30  7:22 ` [PATCH V2 net-next 7/7] vhost_net: try batch dequing from skb array Jason Wang
2017-03-30 14:21   ` Michael S. Tsirkin
2017-03-31  4:02     ` Jason Wang
2017-03-31  6:47       ` Jason Wang
