public inbox for linux-kernel@vger.kernel.org
From: Peter Hurley <peter@hurleysoftware.com>
To: Andi Kleen <andi@firstfloor.org>, Lin Ming <minggr@gmail.com>
Cc: Fengguang Wu <fengguang.wu@intel.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	lkp@01.org, Tejun Heo <tj@kernel.org>
Subject: Re: increased vmap_area_lock contentions on "n_tty: Move buffers into n_tty_data"
Date: Thu, 26 Sep 2013 07:52:23 -0400	[thread overview]
Message-ID: <52441FF7.1030405@hurleysoftware.com> (raw)
In-Reply-To: <87r4cc8e0q.fsf@tassilo.jf.intel.com>

On 09/25/2013 11:20 PM, Andi Kleen wrote:
> Lin Ming <minggr@gmail.com> writes:
>>
>> Would you like below patch?
>
> The loop body keeps rather complex state. It could easily
> get confused by parallel RCU changes.
>
> So if the list changes in parallel you may suddenly
> report very bogus values, as the va_start - prev_end
> computation may be bogus.
>
> Perhaps it's ok (may report bogus gaps), but it seems a bit risky.

I don't understand how the computed gap would be bogus; there
_was_ a list state in which that particular gap existed. The fact
that it may not exist anymore can also happen in the existing
algorithm the instant get_vmalloc_info() drops the vmap_area_lock.

OTOH, parallel list changes could cause an rcu-based get_vmalloc_info()
to over-report or under-report used memory.

If this is a problem in practice, then usage and largest chunk
should be tracked by the allocator instead, obviating the need for
get_vmalloc_info() to traverse the vmap_area_list at all.

Regards,
Peter Hurley


Thread overview: 29+ messages
2013-09-13  0:51 increased vmap_area_lock contentions on "n_tty: Move buffers into n_tty_data" Fengguang Wu
2013-09-13  1:09 ` Fengguang Wu
2013-09-17 15:34   ` Peter Hurley
2013-09-17 23:22     ` Fengguang Wu
2013-09-18  0:22       ` Peter Hurley
2013-09-25  9:04         ` Lin Ming
2013-09-25 11:30           ` Peter Hurley
2013-09-25 14:53             ` Lin Ming
2013-09-25 16:02             ` Lin Ming
2013-09-26  3:20               ` Andi Kleen
2013-09-26 11:52                 ` Peter Hurley [this message]
2013-09-26 15:32                   ` Andi Kleen
2013-09-26 17:22                     ` Peter Hurley
2013-09-26  7:33         ` Andrew Morton
2013-09-26 11:31           ` Peter Hurley
2013-09-26 15:04             ` Greg KH
2013-09-26 17:35               ` Peter Hurley
2013-09-26 18:05                 ` Andrew Morton
2013-09-26 21:42                   ` Peter Hurley
2013-09-26 21:58                     ` Andrew Morton
2013-09-26 22:21                       ` Peter Hurley
2013-09-18  0:49   ` Peter Hurley
2013-09-13  3:17 ` Greg KH
2013-09-13  3:38   ` Fengguang Wu
2013-09-13  3:44     ` Greg KH
2013-09-13  9:55       ` Peter Hurley
2013-09-13 12:34         ` Greg KH
2013-09-17  2:42     ` Peter Hurley
2013-09-17  2:56       ` Fengguang Wu
