From: Peter Hurley <peter@hurleysoftware.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>,
Greg KH <gregkh@linuxfoundation.org>,
LKML <linux-kernel@vger.kernel.org>,
lkp@01.org, Tejun Heo <tj@kernel.org>
Subject: Re: increased vmap_area_lock contentions on "n_tty: Move buffers into n_tty_data"
Date: Thu, 26 Sep 2013 07:31:47 -0400
Message-ID: <52441B23.7050704@hurleysoftware.com>
In-Reply-To: <20130926003315.2e81bc84.akpm@linux-foundation.org>
On 09/26/2013 03:33 AM, Andrew Morton wrote:
> On Tue, 17 Sep 2013 20:22:42 -0400 Peter Hurley <peter@hurleysoftware.com> wrote:
>
>> Looking over vmalloc.c, the critical section footprint of the vmap_area_lock
>> could definitely be reduced (even nearly eliminated), but that's a project for
>> another day :)
>
> 20bafb3d23d10 ("n_tty: Move buffers into n_tty_data") switched a
> kmalloc (which is very fast) to a vmalloc (which is very slow) without
> so much as mentioning it in the changelog. This should have been
> picked up at review, btw.
>
> Revert that part of the patch and the problem will be solved.
>
> If we are really really worried that a ~9k kmalloc might fail or will
> be slow, then implement a fallback to vmalloc() if kmalloc(GFP_NOWARN)
> failed. This kinda sucks, but is practical, but really should only be
> done if necessary - ie, if problems with using plain old kmalloc are
> demonstrable.
>
> Or just revert all of 20bafb3d23d10 - it was supposed to be a small
> performance improvement but turned out to be a significant performance
> loss. Therefore zap.
I have no particular objection to reverting the entire patch.
However, it's a mischaracterization to suggest that the reason is
that vmalloc() is very slow; unless /proc/meminfo is being read,
there is no performance loss.
IOW, the lock contention this patch precipitated needs to get fixed
regardless.
Regards,
Peter Hurley