linux-fsdevel.vger.kernel.org archive mirror
From: Jeff Moyer <jmoyer@redhat.com>
To: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Cc: bcrl@kvack.org, viro@zeniv.linux.org.uk, linux-aio@kvack.org,
	linux-fsdevel@vger.kernel.org, lekshmi.cpillai@in.ibm.com,
	nguyenp@us.ibm.com
Subject: Re: [PATCH RESEND] fs: aio: fix the increment of aio-nr and counting against aio-max-nr
Date: Wed, 05 Jul 2017 15:28:14 -0400	[thread overview]
Message-ID: <x49wp7madkh.fsf@segfault.boston.devel.redhat.com> (raw)
In-Reply-To: <1499262796-4022-1-git-send-email-mauricfo@linux.vnet.ibm.com> (Mauricio Faria de Oliveira's message of "Wed, 5 Jul 2017 10:53:16 -0300")

Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com> writes:

> Currently, aio-nr is incremented in steps of 'num_possible_cpus() * 8'
> for io_setup(nr_events, ..) with 'nr_events < num_possible_cpus() * 4':
>
>     ioctx_alloc()
>     ...
>         nr_events = max(nr_events, num_possible_cpus() * 4);
>         nr_events *= 2;
>     ...
>         ctx->max_reqs = nr_events;
>     ...
>         aio_nr += ctx->max_reqs;
>     ....
>
> This limits the number of aio contexts actually available to much less
> than aio-max-nr, and the problem grows worse as the number of CPUs
> increases.
>
> For example, with 64 CPUs, only 256 aio contexts are actually available
> (with aio-max-nr = 65536) because the increment is 512 in that scenario.
>
> Note: 65536 [max aio contexts] / (64*4*2) [increment per aio context]
> is 128, but it becomes 256 (double) because the old code counts against
> 'aio-max-nr * 2':
>
>     ioctx_alloc()
>     ...
>         if (aio_nr + nr_events > (aio_max_nr * 2UL) ||
>         ...
>             goto err_ctx;
>     ...
>
> This patch uses the original value of nr_events (as passed in from
> userspace) to increment aio-nr and count against aio-max-nr, which
> resolves both problems.
>
> Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
> Reported-by: Lekshmi C. Pillai <lekshmi.cpillai@in.ibm.com>
> Tested-by: Lekshmi C. Pillai <lekshmi.cpillai@in.ibm.com>
> Tested-by: Paul Nguyen <nguyenp@us.ibm.com>

Thanks for your persistence in re-posting this.  The fix looks good to
me.  Ben, can you queue this up?

Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
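
For anyone following along, here is a rough back-of-the-envelope model of
the old vs. new accounting, assuming 64 possible CPUs and the default
aio-max-nr of 65536 (both values are illustrative; your system may differ):

```python
AIO_MAX_NR = 65536   # default /proc/sys/fs/aio-max-nr
NCPUS = 64           # num_possible_cpus() on the example system

def contexts_old(nr_events):
    """Old kernel behavior: nr_events is inflated before it is
    charged against the (doubled) global limit."""
    nr = max(nr_events, NCPUS * 4)   # floor at num_possible_cpus() * 4
    nr *= 2                          # then doubled
    return (AIO_MAX_NR * 2) // nr    # counted against aio-max-nr * 2

def contexts_new(nr_events):
    """Patched behavior: the user-supplied nr_events is charged
    directly against aio-max-nr."""
    return AIO_MAX_NR // nr_events

# A small io_setup(2, ...) request:
print(contexts_old(2))   # 256   -- far below aio-max-nr
print(contexts_new(2))   # 32768
```

With the old accounting, every small io_setup() call is charged at least
NCPUS * 4 * 2 = 512 events, so only 256 contexts fit under the limit; with
the patch, the charge matches what userspace actually asked for.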

> ---
>  fs/aio.c | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index f52d925ee259..3908480d7ccd 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -441,10 +441,9 @@ static int aio_migratepage(struct address_space *mapping, struct page *new,
>  #endif
>  };
>  
> -static int aio_setup_ring(struct kioctx *ctx)
> +static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
>  {
>  	struct aio_ring *ring;
> -	unsigned nr_events = ctx->max_reqs;
>  	struct mm_struct *mm = current->mm;
>  	unsigned long size, unused;
>  	int nr_pages;
> @@ -707,6 +706,12 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
>  	int err = -ENOMEM;
>  
>  	/*
> +	 * Store the original nr_events -- what userspace passed to io_setup(),
> +	 * for counting against the global limit -- before it changes.
> +	 */
> +	unsigned int max_reqs = nr_events;
> +
> +	/*
>  	 * We keep track of the number of available ringbuffer slots, to prevent
>  	 * overflow (reqs_available), and we also use percpu counters for this.
>  	 *
> @@ -724,14 +729,14 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
>  		return ERR_PTR(-EINVAL);
>  	}
>  
> -	if (!nr_events || (unsigned long)nr_events > (aio_max_nr * 2UL))
> +	if (!nr_events || (unsigned long)max_reqs > aio_max_nr)
>  		return ERR_PTR(-EAGAIN);
>  
>  	ctx = kmem_cache_zalloc(kioctx_cachep, GFP_KERNEL);
>  	if (!ctx)
>  		return ERR_PTR(-ENOMEM);
>  
> -	ctx->max_reqs = nr_events;
> +	ctx->max_reqs = max_reqs;
>  
>  	spin_lock_init(&ctx->ctx_lock);
>  	spin_lock_init(&ctx->completion_lock);
> @@ -753,7 +758,7 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
>  	if (!ctx->cpu)
>  		goto err;
>  
> -	err = aio_setup_ring(ctx);
> +	err = aio_setup_ring(ctx, nr_events);
>  	if (err < 0)
>  		goto err;
>  
> @@ -764,8 +769,8 @@ static struct kioctx *ioctx_alloc(unsigned nr_events)
>  
>  	/* limit the number of system wide aios */
>  	spin_lock(&aio_nr_lock);
> -	if (aio_nr + nr_events > (aio_max_nr * 2UL) ||
> -	    aio_nr + nr_events < aio_nr) {
> +	if (aio_nr + ctx->max_reqs > aio_max_nr ||
> +	    aio_nr + ctx->max_reqs < aio_nr) {
>  		spin_unlock(&aio_nr_lock);
>  		err = -EAGAIN;
>  		goto err_ctx;


Thread overview: 8+ messages
2017-07-05 13:53 [PATCH RESEND] fs: aio: fix the increment of aio-nr and counting against aio-max-nr Mauricio Faria de Oliveira
2017-07-05 13:59 ` Mauricio Faria de Oliveira
2017-07-05 19:28 ` Jeff Moyer [this message]
2017-07-06 21:07   ` Benjamin LaHaise
2017-07-06 22:25     ` Jeff Moyer
2017-07-07 12:44     ` Mauricio Faria de Oliveira
2017-07-14 23:58     ` Mauricio Faria de Oliveira
2017-09-07  3:04     ` Mauricio Faria de Oliveira
