From: Yang Shi <yang.shi@linux.alibaba.com>
To: Linus Torvalds <torvalds@linux-foundation.org>
Cc: rong.a.chen@intel.com, vbabka@suse.cz,
kirill.shutemov@linux.intel.com, mhocko@kernel.org,
Matthew Wilcox <willy@infradead.org>,
ldufour@linux.vnet.ibm.com,
Andrew Morton <akpm@linux-foundation.org>,
Colin King <colin.king@canonical.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
lkp@01.org
Subject: Re: [LKP] [mm] 9bc8039e71: will-it-scale.per_thread_ops -64.1% regression
Date: Mon, 5 Nov 2018 12:17:59 -0800
Message-ID: <51265121-6e54-ff3a-cdfa-e5a2b838268d@linux.alibaba.com>
In-Reply-To: <CAHk-=whr2Aio3R49TVWqW3es6heyxXDuxGHcv8Bcc=_kw4vDeQ@mail.gmail.com>
On 11/5/18 10:35 AM, Linus Torvalds wrote:
> On Mon, Nov 5, 2018 at 10:28 AM Yang Shi <yang.shi@linux.alibaba.com> wrote:
>> Actually, the commit is mainly for optimizing the long stall time caused
>> by holding mmap_sem by write when unmapping or shrinking large mapping.
>> It downgrades write mmap_sem to read when zapping pages. So, it looks
>> the downgrade incurs more context switches. This is kind of expected.
>>
>> However, the test looks just shrink the mapping with one normal 4K page
>> size. It sounds the overhead of context switches outpace the gain in
>> this case at the first glance.
> I'm not seeing why there should be a context switch in the first place.
>
> Even if you have lots of concurrent brk() users, they should all block
> exactly the same way as before (a write lock blocks against a write
> lock, but it *also* blocks against a downgraded read lock).
Yes, that is true. The brk() users will not get woken up. What I can think
of for now is that there might be other helper processes and/or kernel
threads waiting on mmap_sem for read; those would get woken up by the
downgrade.
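
For reference, the locking pattern under discussion can be sketched roughly as
follows. This is an illustrative fragment using the kernel rwsem API, not the
actual munmap/brk code path; the function name and comments are hypothetical:

```c
/* Illustrative sketch: the commit downgrades mmap_sem from write to
 * read before zapping pages, so readers queued behind the writer are
 * woken at downgrade_write() rather than at the final unlock. */
#include <linux/mm_types.h>
#include <linux/rwsem.h>

static void shrink_mapping_sketch(struct mm_struct *mm)
{
	down_write(&mm->mmap_sem);
	/* ... detach the VMA range while holding the write lock ... */

	downgrade_write(&mm->mmap_sem);	/* queued readers wake up here */
	/* ... zap page tables / free pages under the read lock ... */

	up_read(&mm->mmap_sem);
}
```

With a single 4K page to zap, the read-locked section is very short, so any
readers woken at the downgrade point may contribute wakeup/context-switch
overhead without much unmap work being overlapped.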
But I also saw a huge increase in CPU idle time and sched_goidle events.
I don't have a clue yet as to why idle time goes up.
20610709 ± 15% +2376.0% 5.103e+08 ± 34% cpuidle.C1.time
28753819 ± 39% +1054.5% 3.319e+08 ± 49% cpuidle.C3.time
175049 ± 72% +840.7% 1646720 ± 72% sched_debug.cpu.sched_goidle.stddev
Thanks,
Yang
>
> So no, I don't want just some limit to hide this problem for that
> particular test. There's something else going on.
>
> Linus
Thread overview: 13+ messages
2018-11-05 5:08 [LKP] [mm] 9bc8039e71: will-it-scale.per_thread_ops -64.1% regression kernel test robot
2018-11-05 17:50 ` Linus Torvalds
2018-11-05 18:28 ` Yang Shi
2018-11-05 18:35 ` Linus Torvalds
2018-11-05 20:17 ` Yang Shi [this message]
2018-11-05 20:09 ` Vlastimil Babka
2018-11-05 22:14 ` Linus Torvalds
2018-11-05 22:40 ` Waiman Long
2018-12-28 1:31 ` Wang, Kemi
2018-12-28 2:55 ` Waiman Long
2018-12-28 2:55 ` kemi
2019-01-31 0:06 ` Tim Chen
2019-01-31 2:54 ` Waiman Long