From: NeilBrown <neilb@suse.com>
To: "J. Bruce Fields" <bfields@redhat.com>,
kernel test robot <rong.a.chen@intel.com>
Cc: Jeff Layton <jlayton@kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Jeff Layton <jlayton@redhat.com>,
lkp@01.org, linux-nfs@vger.kernel.org
Subject: Re: [LKP] [fs/locks] 83b381078b: will-it-scale.per_thread_ops -62.5% regression
Date: Wed, 28 Nov 2018 10:20:34 +1100 [thread overview]
Message-ID: <87mupup0ot.fsf@notabene.neil.brown.name> (raw)
In-Reply-To: <20181127174315.GA29963@parsley.fieldses.org>
On Tue, Nov 27 2018, J. Bruce Fields wrote:
> Thanks for the report!
Yes, thanks. I thought I had replied to the previous report of a similar
problem, but I didn't actually send that email - oops.
Though the test is the same and the regression similar, this is a
different patch. The previous report identified
fs/locks: allow a lock request to block other requests
this one identifies
fs/locks: always delete_block after waiting.
Both cause blocked_lock_lock to be taken more often.
In one case it is due to locks_move_blocks(). That can probably be
optimised to skip the lock if list_empty(&fl->fl_blocked_requests).
I'd need to double-check, but I think that is safe to check without
locking.
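
The optimisation suggested above might look something like this kernel-style sketch (not compilable on its own; the function and field names follow fs/locks.c as of these patches, but the exact body is illustrative, and the safety of the unlocked list_empty() test is precisely the point that still needs double-checking):

```c
static void locks_move_blocks(struct file_lock *new, struct file_lock *fl)
{
	struct file_lock *f;

	/* Suggested fast path: most lock requests never have anything
	 * blocked on them, so skip blocked_lock_lock entirely in that
	 * case.  Reading the list head without the lock is assumed to
	 * be safe here; that assumption is what needs verifying. */
	if (list_empty(&fl->fl_blocked_requests))
		return;

	spin_lock(&blocked_lock_lock);
	list_splice_init(&fl->fl_blocked_requests, &new->fl_blocked_requests);
	list_for_each_entry(f, &new->fl_blocked_requests, fl_blocked_member)
		f->fl_blocker = new;
	spin_unlock(&blocked_lock_lock);
}
```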
This one causes locks_delete_block() to be called more often. We now
call it even if no waiting happened at all. I suspect we can test for
that and avoid it. I'll have a look.
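
A possible shape for that test, again as an illustrative kernel-style sketch rather than a finished patch: if the request was never linked into a blocker's tree and nothing ever blocked on it, there is nothing to delete, so blocked_lock_lock can be avoided entirely.

```c
int locks_delete_block(struct file_lock *waiter)
{
	int status = -ENOENT;

	/* Suggested fast path: no waiting ever happened, so there is
	 * nothing to unlink.  The unlocked reads of fl_blocker and the
	 * list head are an assumption that would need checking. */
	if (waiter->fl_blocker == NULL &&
	    list_empty(&waiter->fl_blocked_requests))
		return status;

	spin_lock(&blocked_lock_lock);
	if (waiter->fl_blocker)
		status = 0;
	__locks_wake_up_blocks(waiter);
	__locks_delete_block(waiter);
	spin_unlock(&blocked_lock_lock);
	return status;
}
```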
>
> On Tue, Nov 27, 2018 at 02:01:02PM +0800, kernel test robot wrote:
>> FYI, we noticed a -62.5% regression of will-it-scale.per_thread_ops due to commit:
>>
>>
>> commit: 83b381078b5ecab098ebf6bc9548bb32af1dbf31 ("fs/locks: always delete_block after waiting.")
>> https://git.kernel.org/cgit/linux/kernel/git/jlayton/linux.git locks-next
>>
>> in testcase: will-it-scale
>> on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
>> with following parameters:
>>
>> nr_task: 16
>> mode: thread
>> test: lock1
>
> So I guess it's doing this, uncontended file lock/unlock?:
>
> https://github.com/antonblanchard/will-it-scale/blob/master/tests/lock1.c
>
> Each thread is repeatedly locking and unlocking a file that is only used
> by that thread.
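> 
> A minimal user-space sketch of that workload (single-threaded for
> brevity, and using a mkstemp() temp file rather than lock1.c's own
> setup): each iteration takes and drops an uncontended POSIX record
> lock, so every fcntl() call exercises the fs/locks.c paths changed
> by these patches.
> 
> ```c
> #include <fcntl.h>
> #include <stdio.h>
> #include <stdlib.h>
> #include <unistd.h>
> 
> int main(void)
> {
> 	char path[] = "/tmp/lock1-XXXXXX";
> 	int fd = mkstemp(path);
> 	if (fd < 0) { perror("mkstemp"); return 1; }
> 	unlink(path);           /* file lives only as long as the fd */
> 
> 	struct flock fl = { .l_whence = SEEK_SET,
> 			    .l_start = 0, .l_len = 0 /* whole file */ };
> 	long ops = 0;
> 	for (int i = 0; i < 100000; i++) {
> 		fl.l_type = F_WRLCK;	/* take the lock (never contended) */
> 		if (fcntl(fd, F_SETLK, &fl) < 0) { perror("lock"); return 1; }
> 		fl.l_type = F_UNLCK;	/* and drop it again */
> 		if (fcntl(fd, F_SETLK, &fl) < 0) { perror("unlock"); return 1; }
> 		ops++;
> 	}
> 	printf("completed %ld lock/unlock cycles\n", ops);
> 	close(fd);
> 	return 0;
> }
> ```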
Thanks for identifying that Bruce.
This would certainly be a case where locks_delete_block() is now being
called when it wasn't before.
>
> By the way, what's the X-axis on these graphs? (Or the y-axis, for that
> matter?)
A key would help. I think the X-axis is number-of-threads. y-axis
might be ops-per-second ??.
Thanks,
NeilBrown
>
> --b.
>
>> will-it-scale.per_thread_ops
>>
>> [ASCII plot: bisect-good (+) samples hold around 400000 ops;
>> bisect-bad (O) samples sit between 100000 and 150000 ops]
>>
>>
>> will-it-scale.workload
>>
>> [ASCII plot: bisect-good (+) samples around 6e+06;
>> bisect-bad (O) samples around 2e+06]
>>
>>
>> will-it-scale.time.user_time
>>
>> [ASCII plot: bisect-good (+) samples around 200;
>> bisect-bad (O) samples between 50 and 100]
>>
>>
>> will-it-scale.time.system_time
>>
>> [ASCII plot: bisect-bad (O) samples around 4500 across the run;
>> bisect-good samples near the bottom of the range]
>>
>>
>> [*] bisect-good sample
>> [O] bisect-bad sample
Thread overview: 5+ messages
[not found] <20181127060102.GF6163@shao2-debian>
2018-11-27 17:43 ` [LKP] [fs/locks] 83b381078b: will-it-scale.per_thread_ops -62.5% regression J. Bruce Fields
2018-11-27 23:20 ` NeilBrown [this message]
2018-11-28 0:53 ` [PATCH] locks: fix performance regressions NeilBrown
2018-11-28 9:17 ` kernel test robot
2018-11-28 11:37 ` Jeff Layton