netdev.vger.kernel.org archive mirror
From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Joao Martins' <joao.m.martins@oracle.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH net-next v1] xen-netback: make copy batch size configurable
Date: Mon, 13 Nov 2017 11:58:03 +0000	[thread overview]
Message-ID: <40fc53458a524c64af50b48e43bfd251@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <8c18502b-11ff-be33-a584-a8bdf8960292@oracle.com>

> -----Original Message-----
> From: Joao Martins [mailto:joao.m.martins@oracle.com]
> Sent: 13 November 2017 11:54
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: netdev@vger.kernel.org; Wei Liu <wei.liu2@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH net-next v1] xen-netback: make copy batch size
> configurable
> 
> On 11/13/2017 10:33 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Joao Martins [mailto:joao.m.martins@oracle.com]
> >> Sent: 10 November 2017 19:35
> >> To: netdev@vger.kernel.org
> >> Cc: Joao Martins <joao.m.martins@oracle.com>; Wei Liu
> >> <wei.liu2@citrix.com>; Paul Durrant <Paul.Durrant@citrix.com>; xen-
> >> devel@lists.xenproject.org
> >> Subject: [PATCH net-next v1] xen-netback: make copy batch size
> >> configurable
> >>
> >> Commit eb1723a29b9a ("xen-netback: refactor guest rx") refactored Rx
> >> handling and as a result decreased max grant copy ops from 4352 to 64.
> >> Before this commit it would drain the rx_queue (while there are
> >> enough slots in the ring to put packets) then copy to all pages and write
> >> responses on the ring. With the refactor we do almost the same albeit
> >> the last two steps are done every COPY_BATCH_SIZE (64) copies.
> >>
> >> For big packets, the value of 64 means copying 3 packets in the best
> >> case (17 copies per packet) and only 1 packet in the worst case (34
> >> copies, i.e. if all frags plus the head cross the 4k grant boundary),
> >> which could be the case when packets come from a local backend process.
> >>
> >> Instead of making it static at 64 grant copies, let's allow the user to
> >> select the value (while keeping the current one as default) by introducing
> >> the `copy_batch_size` module parameter. This allows users to select
> >> higher batches (i.e. for better throughput with big packets), as was
> >> possible prior to the above mentioned commit.
> >>
> >> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> >> ---
> >>  drivers/net/xen-netback/common.h    |  6 ++++--
> >>  drivers/net/xen-netback/interface.c | 25 ++++++++++++++++++++++++-
> >>  drivers/net/xen-netback/netback.c   |  5 +++++
> >>  drivers/net/xen-netback/rx.c        |  5 ++++-
> >>  4 files changed, 37 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >> index a46a1e94505d..a5fe36e098a7 100644
> >> --- a/drivers/net/xen-netback/common.h
> >> +++ b/drivers/net/xen-netback/common.h
> >> @@ -129,8 +129,9 @@ struct xenvif_stats {
> >>  #define COPY_BATCH_SIZE 64
> >>
> >>  struct xenvif_copy_state {
> >> -	struct gnttab_copy op[COPY_BATCH_SIZE];
> >> -	RING_IDX idx[COPY_BATCH_SIZE];
> >> +	struct gnttab_copy *op;
> >> +	RING_IDX *idx;
> >> +	unsigned int size;
> >
> > Could you name this batch_size, or something like that, to make it clear
> > what it means?
> >
> Yeap, will change it.
> 
> >>  	unsigned int num;
> >>  	struct sk_buff_head *completed;
> >>  };
> >> @@ -381,6 +382,7 @@ extern unsigned int rx_drain_timeout_msecs;
> >>  extern unsigned int rx_stall_timeout_msecs;
> >>  extern unsigned int xenvif_max_queues;
> >>  extern unsigned int xenvif_hash_cache_size;
> >> +extern unsigned int xenvif_copy_batch_size;
> >>
> >>  #ifdef CONFIG_DEBUG_FS
> >>  extern struct dentry *xen_netback_dbg_root;
> >> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> >> index d6dff347f896..a558868a883f 100644
> >> --- a/drivers/net/xen-netback/interface.c
> >> +++ b/drivers/net/xen-netback/interface.c
> >> @@ -516,7 +516,20 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> >>
> >>  int xenvif_init_queue(struct xenvif_queue *queue)
> >>  {
> >> +	int size = xenvif_copy_batch_size;
> >
> > unsigned int
> >>  	int err, i;
> >> +	void *addr;
> >> +
> >> +	addr = vzalloc(size * sizeof(struct gnttab_copy));
> >
> > Does the memory need to be zeroed?
> >
> It doesn't need to be, but given that xenvif_queue is zeroed (which
> includes this region) I thought I would leave it the same way.

Ok.

> 
> >> +	if (!addr)
> >> +		goto err;
> >> +	queue->rx_copy.op = addr;
> >> +
> >> +	addr = vzalloc(size * sizeof(RING_IDX));
> >
> > Likewise.
> >
> >> +	if (!addr)
> >> +		goto err;
> >> +	queue->rx_copy.idx = addr;
> >> +	queue->rx_copy.size = size;
> >>
> >>  	queue->credit_bytes = queue->remaining_credit = ~0UL;
> >>  	queue->credit_usec  = 0UL;
> >> @@ -544,7 +557,7 @@ int xenvif_init_queue(struct xenvif_queue *queue)
> >>  				 queue->mmap_pages);
> >>  	if (err) {
> >>  		netdev_err(queue->vif->dev, "Could not reserve mmap_pages\n");
> >> -		return -ENOMEM;
> >> +		goto err;
> >>  	}
> >>
> >>  	for (i = 0; i < MAX_PENDING_REQS; i++) {
> >> @@ -556,6 +569,13 @@ int xenvif_init_queue(struct xenvif_queue *queue)
> >>  	}
> >>
> >>  	return 0;
> >> +
> >> +err:
> >> +	if (queue->rx_copy.op)
> >> +		vfree(queue->rx_copy.op);
> >
> > vfree is safe to be called with NULL.
> >
> Oh, almost forgot - thanks.
> 
> >> +	if (queue->rx_copy.idx)
> >> +		vfree(queue->rx_copy.idx);
> >> +	return -ENOMEM;
> >>  }
> >>
> >>  void xenvif_carrier_on(struct xenvif *vif)
> >> @@ -788,6 +808,9 @@ void xenvif_disconnect_ctrl(struct xenvif *vif)
> >>   */
> >>  void xenvif_deinit_queue(struct xenvif_queue *queue)
> >>  {
> >> +	vfree(queue->rx_copy.op);
> >> +	vfree(queue->rx_copy.idx);
> >> +	queue->rx_copy.size = 0;
> >>  	gnttab_free_pages(MAX_PENDING_REQS, queue->mmap_pages);
> >>  }
> >>
> >> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> >> index a27daa23c9dc..3a5e1d7ac2f4 100644
> >> --- a/drivers/net/xen-netback/netback.c
> >> +++ b/drivers/net/xen-netback/netback.c
> >> @@ -96,6 +96,11 @@ unsigned int xenvif_hash_cache_size = XENVIF_HASH_CACHE_SIZE_DEFAULT;
> >>  module_param_named(hash_cache_size, xenvif_hash_cache_size, uint, 0644);
> >>  MODULE_PARM_DESC(hash_cache_size, "Number of flows in the hash cache");
> >>
> >> +/* This is the maximum batch of grant copies on Rx */
> >> +unsigned int xenvif_copy_batch_size = COPY_BATCH_SIZE;
> >> +module_param_named(copy_batch_size, xenvif_copy_batch_size, uint, 0644);
> >> +MODULE_PARM_DESC(copy_batch_size, "Maximum batch of grant copies on Rx");
> >> +
> >>  static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
> >>  			       u8 status);
> >>
> >> diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
> >> index b1cf7c6f407a..793a85f61f9d 100644
> >> --- a/drivers/net/xen-netback/rx.c
> >> +++ b/drivers/net/xen-netback/rx.c
> >> @@ -168,11 +168,14 @@ static void xenvif_rx_copy_add(struct xenvif_queue *queue,
> >>  			       struct xen_netif_rx_request *req,
> >>  			       unsigned int offset, void *data, size_t len)
> >>  {
> >> +	unsigned int batch_size;
> >>  	struct gnttab_copy *op;
> >>  	struct page *page;
> >>  	struct xen_page_foreign *foreign;
> >>
> >> -	if (queue->rx_copy.num == COPY_BATCH_SIZE)
> >> +	batch_size = min(xenvif_copy_batch_size, queue->rx_copy.size);
> >
> > Surely queue->rx_copy.size and xenvif_copy_batch_size are always
> > identical? Why do you need this statement (and hence stack variable)?
> >
> This statement was to allow the value to be changed dynamically, affecting
> all newly created guests, or running guests if the new value happened to be
> smaller than the initially allocated one. But I suppose I should make the
> behaviour more consistent with the other params we have right now and just
> look at the initially allocated one, `queue->rx_copy.batch_size`?

Yes, that would certainly be consistent, but I can see value in allowing it to be dynamically tuned, so perhaps adding some re-allocation code to allow the batch to be grown as well as shrunk might be nice.

  Paul

> 
> Joao


Thread overview: 9+ messages
2017-11-10 19:34 [PATCH net-next v1] xen-netback: make copy batch size configurable Joao Martins
2017-11-13 10:33 ` Paul Durrant
2017-11-13 10:50   ` [Xen-devel] " Jan Beulich
2017-11-13 11:03     ` Paul Durrant
2017-11-13 11:54   ` Joao Martins
2017-11-13 11:58     ` Paul Durrant [this message]
2017-11-13 16:34       ` Joao Martins
2017-11-13 16:39         ` Paul Durrant
2017-11-13 16:53           ` Joao Martins
