public inbox for linux-kernel@vger.kernel.org
From: Jeremy Fitzhardinge <jeremy@goop.org>
To: Nick Piggin <npiggin@suse.de>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Subject: Re: [PATCH] vmap: add flag to allow lazy unmap to be disabled at runtime
Date: Tue, 27 Jul 2010 11:56:50 -0700	[thread overview]
Message-ID: <4C4F2BF2.6010903@goop.org> (raw)
In-Reply-To: <20100727082439.GB6332@amd>

  On 07/27/2010 01:24 AM, Nick Piggin wrote:
> On Mon, Jul 26, 2010 at 01:24:51PM -0700, Jeremy Fitzhardinge wrote:
>>
>> [ Nick, I forget if I sent this to you before.  Could you Ack it if it looks OK? Thanks, J ]
>>
>> Add a flag to force lazy_max_pages() to zero to prevent any outstanding
>> mapped pages.  We'll need this for Xen.
> You have sent this to me before, probably several times, and I always
> forget about it right as you send it again.
>
> It's no problem merging something like this for Xen, although as you
> know I would love to see an approach where Xen would benefit from
> delayed flushing as well :)

Yes indeed, that would be nice to get.  What it comes down to is we need 
to be able to flush any lazy vunmap aliases from within interrupt 
context, but the code really isn't set up to do that, and last time I 
tried to understand that code I couldn't see a straightforward way to 
make it work.  It would also be nice to have a way to shoot down the 
aliases for a specific page, assuming that's any more efficient than 
flushing everything.

I don't think anything has changed since we last talked about this.

> You will need to disable lazy flushing from the per-cpu allocator
> (vm_map_ram/vm_unmap_ram, which are used by XFS now). That's not
> tied to the lazy_max stuff (which it should be, arguably)

Ah, OK.  I should really add xfs to our roster of regularly tested 
filesystems, since it seems to play the most games.  Do you know of any 
other filesystems which do that kind of thing?

> That code basically allocates per-cpu chunks of va from the global
> allocator, uses them, then frees them back to the global allocator
> all without doing any TLB flushing.
>
> If you have to do global TLB flushing there, then it's probably not
> much value in per-cpu locking of the address allocator anyway, so
> you could just add a test for vmap_lazy_unmap in these branches:
>
>    if (likely(count <= VMAP_MAX_ALLOC) && !vmap_lazy_unmap)

We don't need to do any tlb flushing in these cases, because we're 
concerned about making sure we know what ptes a given page is mapped 
by.  The hypervisor will do any tlb flushing it requires to maintain its 
own invariants (so, for example, we can't use a stale tlb entry to keep 
accessing a page we've given back to Xen).

Thanks,
     J

      parent reply	other threads:[~2010-07-27 18:56 UTC|newest]

Thread overview: 4+ messages
2010-07-26 20:24 [PATCH] vmap: add flag to allow lazy unmap to be disabled at runtime Jeremy Fitzhardinge
2010-07-27  8:24 ` Nick Piggin
2010-07-27 18:13   ` Konrad Rzeszutek Wilk
2010-07-27 18:56   ` Jeremy Fitzhardinge [this message]
