From: Michel Lespinasse <walken@google.com>
To: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: linux-mm <linux-mm@kvack.org>
Subject: Re: RFC: reviving mlock isolation dead code
Date: Wed, 10 Nov 2010 04:21:59 -0800	[thread overview]
Message-ID: <AANLkTinrtXrwgwUXNOaM_AGin2iEMqN2wWciMzJUPUyB@mail.gmail.com> (raw)
In-Reply-To: <20101109115540.BC3F.A69D9226@jp.fujitsu.com>

On Mon, Nov 8, 2010 at 8:34 PM, KOSAKI Motohiro
<kosaki.motohiro@jp.fujitsu.com> wrote:
> While on the airplane coming back from KS and LPC, I was thinking about this issue. Now I
> think we can solve it. Can I run my idea by you?

I have been having similar thoughts over the past week. I'll try to
send a related patch set soon.

> Now, mlock has following call flow
>
> sys_mlock
>        down_write(mmap_sem)
>        do_mlock()
>                for-each-vma
>                        mlock_fixup()
>                                __mlock_vma_pages_range()
>                                        __get_user_pages()
>        up_write(mmap_sem)
>
> Then, I'd propose a two-phase mlock. That is,
>
> sys_mlock
>        down_write(mmap_sem)
>        do_mlock()
>                for-each-vma
>                        turn on VM_LOCKED and merge/split vma
>        downgrade_write(mmap_sem)
>                for-each-vma
>                        mlock_fixup()
>                                __mlock_vma_pages_range()
>        up_read(mmap_sem)
>
> Usually, kernel developers strongly dislike two-phase schemes because they're slow. But at
> least _I_ think it's OK in this case, because mlock is a really, really slow syscall; it often
> takes a few *minutes*. Being a few microseconds slower is not a big deal.
>
> What do you think?

downgrade_write() would help, but only partially. If another thread
tries to acquire the mmap_sem for write, it will get queued for a long
time until mlock() completes - this may in itself be acceptable, but
the issue here is that additional readers like try_to_unmap_one()
won't be able to acquire the mmap_sem anymore. This is because the
rwsem code prevents new readers from entering once there is a queued
writer, in order to avoid starvation.
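
To make that concrete, KOSAKI's scheme would look roughly like this
(a sketch only, not tested code; mmap_sem here is the rw_semaphore
embedded in struct mm_struct):

	down_write(&mm->mmap_sem);
	/* phase 1: set VM_LOCKED and merge/split the vmas */
	downgrade_write(&mm->mmap_sem);
	/*
	 * phase 2: fault the pages in. We hold mmap_sem for read
	 * across the whole loop, which can take minutes for a large
	 * range. Once a writer queues up behind us, the rwsem code
	 * stops admitting new readers, so try_to_unmap_one() and
	 * page faults in other threads stall until we finish.
	 */
	up_read(&mm->mmap_sem);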

My proposal would be as follows:

sys_mlock
       down_write(mmap_sem)
       do_mlock()
               for-each-vma
                       turn on VM_LOCKED and merge/split vma
       up_write(mmap_sem)
       for (addr = start of mlock range; addr < end of mlock range; addr = next_addr)
               down_read(mmap_sem)
               find vma for addr
               next_addr = end of the vma
               if vma still has VM_LOCKED flag:
                       next_addr = min(next_addr, addr + a few pages)
                       mlock a small batch of pages from that vma (from addr to next_addr)
               up_read(mmap_sem)
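
In rough C, the loop could look like this (a sketch under several
assumptions: BATCH_PAGES is a made-up constant, and I'm reusing
__mlock_vma_pages_range() as the batch worker even though its exact
signature may need to change):

	unsigned long addr = start, next;
	struct vm_area_struct *vma;

	while (addr < end) {
		down_read(&mm->mmap_sem);
		vma = find_vma(mm, addr);
		if (!vma || vma->vm_start >= end) {
			up_read(&mm->mmap_sem);
			break;
		}
		if (addr < vma->vm_start)
			addr = vma->vm_start;	/* skip any hole */
		next = min(vma->vm_end, end);
		if (vma->vm_flags & VM_LOCKED) {
			/* small batches, so we drop mmap_sem often */
			next = min(next, addr + BATCH_PAGES * PAGE_SIZE);
			__mlock_vma_pages_range(vma, addr, next);
		}
		up_read(&mm->mmap_sem);
		addr = next;
	}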

Since a large mlock() can take a long time and we don't want to hold
mmap_sem for that long, we have to allow other threads to grab
mmap_sem and deal with the concurrency issues.

The races aren't actually too bad:

* If some other thread creates new VM_LOCKED vmas within the mlock
range while sys_mlock() is working: both threads will be trying to
mlock_fixup the same page range at once. This is no big deal as
__mlock_vma_pages_range already only needs mmap_sem held for read: the
get_user_pages() part can safely proceed in parallel and the
mlock_vma_page() part is protected by the page lock and won't do
anything if the PageMlocked flag is already set.
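
For reference, this is roughly what mlock_vma_page() in mm/mlock.c
looks like today (simplified):

	void mlock_vma_page(struct page *page)
	{
		BUG_ON(!PageLocked(page));

		/*
		 * TestSetPageMlocked() makes this idempotent: if a
		 * second thread races in on the same page, the flag
		 * is already set and we do nothing.
		 */
		if (!TestSetPageMlocked(page)) {
			inc_zone_page_state(page, NR_MLOCK);
			if (!isolate_lru_page(page))
				putback_lru_page(page);
		}
	}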

* If some other thread creates new non-VM_LOCKED vmas, or munlocks the
same address ranges that mlock() is currently working on: the mlock()
code needs to be careful here to not mlock the pages when the vmas
don't have the VM_LOCKED flag anymore. From the user process's point
of view, things will look as if the mlock had completed first,
followed by the munlock.

The other mlock-related issue I have is that it marks pages as dirty
(if they are in a writable VMA), and causes writeback to work on them,
even though the pages have not actually been modified. This looks like
it would be solvable with a new get_user_pages flag for mlock use
(breaking cow etc, but not writing to the pages just yet).
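
For instance (everything below is hypothetical; no such flag exists
yet, and the name is made up):

	/*
	 * Hypothetical gup flag for the mlock path: break COW where
	 * needed, but don't set the dirty bit, since mlock itself
	 * never writes to the pages.
	 */
	#define FOLL_MLOCK	0x100

	/* in __mlock_vma_pages_range(), instead of always passing
	   FOLL_WRITE for writable vmas: */
	int gup_flags = FOLL_TOUCH | FOLL_GET | FOLL_MLOCK;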

-- 
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.
