From: Michal Hocko <mhocko@kernel.org>
To: Seth Forshee <seth.forshee@canonical.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: Memory hotplug regression in 4.13
Date: Fri, 29 Dec 2017 14:05:46 +0100
Message-ID: <20171229130546.GD27077@dhcp22.suse.cz>
In-Reply-To: <20171222184515.GT11858@ubuntu-hedt>

On Fri 22-12-17 12:45:15, Seth Forshee wrote:
> On Fri, Dec 22, 2017 at 10:12:40AM -0600, Seth Forshee wrote:
> > On Fri, Dec 22, 2017 at 03:49:25PM +0100, Michal Hocko wrote:
> > > On Mon 18-12-17 15:53:20, Michal Hocko wrote:
> > > > On Fri 01-12-17 08:23:27, Seth Forshee wrote:
> > > > > On Mon, Sep 25, 2017 at 02:58:25PM +0200, Michal Hocko wrote:
> > > > > > On Thu 21-09-17 00:40:34, Seth Forshee wrote:
> > > > [...]
> > > > > > > It seems I don't have that kernel anymore, but I've got a 4.14-rc1 build
> > > > > > > and the problem still occurs there. It's pointing to the call to
> > > > > > > __builtin_memcpy in memcpy (include/linux/string.h line 340), which we
> > > > > > > get to via wp_page_copy -> cow_user_page -> copy_user_highpage.
> > > > > > 
> > > > > > Hmm, this is interesting. That would mean that we have successfully
> > > > > > mapped the destination page but its memory is still not accessible.
> > > > > > 
> > > > > > Right now I do not see how the patch you bisected to could make any
> > > > > > difference, because it only made the onlining an independent, postponed
> > > > > > step, and your config onlines automatically anyway, so there shouldn't
> > > > > > be any semantic change. Maybe there is some sort of off-by-one or something.
> > > > > > 
> > > > > > I will try to investigate some more. Do you think it would be possible
> > > > > > to configure kdump on your system and provide me with the vmcore in some
> > > > > > way?
> > > > > 
> > > > > Sorry, I got busy with other stuff and this kind of fell off my radar.
> > > > > It came to my attention again recently though.
> > > > 
> > > > Apologies on my side. This has completely fallen off my radar.
> > > > 
> > > > > I was looking through the hotplug rework changes, and I noticed that
> > > > > 32-bit x86 previously was using ZONE_HIGHMEM as a default but after the
> > > > > rework it doesn't look like it's possible for memory to be associated
> > > > > with ZONE_HIGHMEM when onlining. So I made the change below against 4.14
> > > > > and am now no longer seeing the oopses.
> > > > 
> > > > Thanks a lot for debugging! Do I read the above correctly that the
> > > > current code simply returns ZONE_NORMAL and maps an unrelated pfn into
> > > > this zone and that leads to later blowups? Could you attach the fresh
> > > > boot dmesg output please?
> > > > 
> > > > > I'm sure this isn't the correct fix, but I think it does confirm that
> > > > > the problem is that the memory should be associated with ZONE_HIGHMEM
> > > > > but is not.
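
Aside, a rough way to see which zones the kernel is actually willing to
online a given block into is that block's valid_zones file in sysfs. A
minimal userspace sketch, with a made-up block number:

/* Print the zones a memory block may be onlined into, as reported by
 * its sysfs "valid_zones" file.  "memory32" is a made-up block number. */
#include <stdio.h>

int main(void)
{
	char buf[128];
	FILE *f = fopen("/sys/devices/system/memory/memory32/valid_zones", "r");

	if (!f) {
		perror("valid_zones");
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("valid zones: %s", buf);	/* e.g. "Normal Movable" */
	fclose(f);
	return 0;
}
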
> > > > 
> > > > 
> > > > Yes, the fix is not quite right. HIGHMEM is not a _kernel_ memory
> > > > zone. The kernel cannot access that memory directly. It is essentially a
> > > > movable zone from the hotplug API POV. We simply do not have any way to
> > > > tell into which zone we want to online this memory range.
> > > > Unfortunately both zones _can_ be present. That would require an explicit
> > > > configuration (movable_node and NUMA hotpluggable nodes running in 32b,
> > > > or movable memory configured explicitly on the kernel command line).
> > > > 
> > > > The below patch is not really complete but I would rather start simple.
> > > > Maybe we do not even have to care as most 32b users will never use both
> > > > zones at the same time. I've placed a warning to learn about those.
> > > > 
> > > > Does this pass your testing?
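
FWIW, if a particular zone is needed, the memory block's sysfs state file
lets userspace ask for it explicitly at online time. A minimal sketch (the
block number is made up; plain "online" leaves the choice to the kernel):

/* Online one memory block into an explicitly requested zone by writing
 * to its sysfs "state" file.  "memory32" is a made-up block number. */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *path = "/sys/devices/system/memory/memory32/state";
	FILE *f = fopen(path, "w");

	if (!f) {
		fprintf(stderr, "%s: %s\n", path, strerror(errno));
		return 1;
	}
	/* "online_movable" requests ZONE_MOVABLE, "online_kernel" a kernel
	 * zone such as ZONE_NORMAL. */
	if (fputs("online_movable", f) == EOF || fclose(f) != 0) {
		perror(path);
		return 1;
	}
	return 0;
}
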
> > > 
> > > Any chances to test this?
> > 
> > Yes, I should get to testing it soon. I'm working through a backlog of
> > things I need to get done and this just hasn't quite made it to the top.
> 
> I started by testing vanilla 4.15-rc4 with a VM that has several memory
> slots already populated at boot. With that I no longer get an oops;
> however, while /sys/devices/system/memory/*/online is 1, it looks like
> the memory isn't being used. With your patch the behavior is the same.
> I'm attaching dmesg from both kernels.

What do you mean? The overall available memory doesn't match the size of
all memblocks?
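
If it is the totals that look off, a quick userspace cross-check is to sum
the sizes of the memory blocks sysfs reports as online and compare with
MemTotal. A rough sketch (MemTotal will always be somewhat smaller than the
raw sum because of kernel reservations, so only a large gap is interesting):

/* Sum the sizes of all online memory blocks in sysfs and compare with
 * MemTotal from /proc/meminfo. */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	unsigned long long block_size = 0, online_bytes = 0, memtotal_kb = 0;
	char path[256], buf[128];
	struct dirent *de;
	DIR *dir;
	FILE *f;

	/* Per-block size, reported as a hex string. */
	f = fopen("/sys/devices/system/memory/block_size_bytes", "r");
	if (f && fgets(buf, sizeof(buf), f))
		block_size = strtoull(buf, NULL, 16);
	if (f)
		fclose(f);

	dir = opendir("/sys/devices/system/memory");
	if (!dir)
		return 1;
	while ((de = readdir(dir))) {
		if (strncmp(de->d_name, "memory", 6) != 0)
			continue;
		snprintf(path, sizeof(path),
			 "/sys/devices/system/memory/%s/online", de->d_name);
		f = fopen(path, "r");
		if (f && fgets(buf, sizeof(buf), f) && buf[0] == '1')
			online_bytes += block_size;
		if (f)
			fclose(f);
	}
	closedir(dir);

	f = fopen("/proc/meminfo", "r");
	if (f) {
		while (fgets(buf, sizeof(buf), f))
			if (sscanf(buf, "MemTotal: %llu kB", &memtotal_kb) == 1)
				break;
		fclose(f);
	}

	printf("online memblocks: %llu MiB, MemTotal: %llu MiB\n",
	       online_bytes >> 20, memtotal_kb >> 10);
	return 0;
}
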
-- 
Michal Hocko
SUSE Labs

Thread overview: 11+ messages
2017-09-19 16:41 Memory hotplug regression in 4.13 Seth Forshee
2017-09-20  9:29 ` Michal Hocko
2017-09-21  5:40   ` Seth Forshee
2017-09-25 12:58     ` Michal Hocko
2017-12-01 14:23       ` Seth Forshee
2017-12-18 14:53         ` Michal Hocko
2017-12-18 17:54           ` Randy Dunlap
2017-12-22 14:49           ` Michal Hocko
2017-12-22 16:12             ` Seth Forshee
2017-12-22 18:45               ` Seth Forshee
2017-12-29 13:05                 ` Michal Hocko [this message]
