public inbox for kvm@vger.kernel.org
From: Avi Kivity <avi@redhat.com>
To: Andi Kleen <andi@firstfloor.org>
Cc: Andre Przywara <andre.przywara@amd.com>, kvm@vger.kernel.org
Subject: Re: [PATCH 0/3] KVM-userspace: add NUMA support for guests
Date: Sun, 30 Nov 2008 20:07:25 +0200	[thread overview]
Message-ID: <4932D65D.8000509@redhat.com> (raw)
In-Reply-To: <20081130174250.GY6703@one.firstfloor.org>

Andi Kleen wrote:
>>> I was more thinking about some heuristics that checks when a page
>>> is first mapped into user space. The only problem is that it is zeroed
>>> through the direct mapping before, but perhaps there is a way around it. 
>>> That's one of the rare cases when 32bit highmem actually makes things 
>>> easier.
>>> It might be also easier on some other OS than Linux who don't use
>>> direct mapping that aggressively.
>>>  
>>>       
>> In the context of kvm, the mmap() calls happen before the guest ever 
>>     
>
> The mmap call doesn't matter at all, what matters is when the
> page is allocated.
>
>   

The page is allocated at an uninteresting point in time.  For example, 
the boot loader allocates a bunch of pages.

>> executes.  First access happens somewhat later, but still we cannot 
>> count on the majority of accesses to come from the same cpu as the first 
>> access.
>>     
>
> It is a reasonable heuristic. It's just like the rather
> successful default local allocation heuristic the native kernel uses.
>   

It's very different.  The kernel expects an application that touched 
page X on node Y to continue using page X on node Y.  Because 
applications know this, they are written to this assumption.  However, 
in a virtualization context, the guest kernel expects that page X 
belongs to whatever node the SRAT table points at, without regard to the 
first access.

Guest kernels behave differently from applications, because real 
hardware doesn't allocate pages dynamically like the kernel can for 
applications.

(btw, what do you do with cpu-less nodes? I think some SGI hardware has 
them.)

>>> The alternative is to keep your own pools and allocate from the
>>> correct pool, but then you either need pinning or getcpu()
>>>  
>>>       
>> This is meaningless in kvm context.  Other than small bits of memory 
>> needed for I/O and shadow page tables, the bulk of memory is allocated 
>> once. 
>>     
>
> Mapped once. Anyway, that could be changed too if there were a need.
>
>   

Mapped once and allocated once (not at the same time, but fairly close).

We can't change it without changing the guest.

>>> Basic algorithm:
>>> - If guest touches virtual node that is the same as the local node
>>> of the current vcpu assume it's a local allocation.
>>>  
>>>       
>> The guest is not making the same assumption; lying to the guest is 
>>     
>
> Huh? Pretty much all NUMA-aware OSes should. Linux definitely will.
>
>   

No.  Linux will assume a page belongs to the node the SRAT table says it 
belongs to.  Whether first access will be from the local node depends on 
the workload.  If the first application to run accesses all memory from 
a single cpu, we will allocate all memory on one node, but this is wrong.

>> (2) even without npt/ept, we have no idea how often mappings are used 
>> and by which cpu.  finding out is expensive.
>>     
>
> You see a fault on the first mapping. That fault is on the CPU that
> did the access.  Therefore you know which one it was.
>   

It's meaningless information.  First access means nothing.  And again, 
the guest doesn't expect the page to move to the node where it touched it.

(we also see first access with ept)

>> (3) for many workloads, there are no unused pages.  the guest 
>> application allocates all memory and manages memory by itself.
>>     
>
> First, a common case of a guest using all memory is file cache,
> but for NUMA purposes file cache locality typically doesn't
> matter, because it's not accessed frequently enough for
> non-locality to be a problem. It really only matters for mappings
> that are used often by the CPU.
>
> When a single application allocates everything and keeps it, that is fine
> too, because you'll give it approximately local memory on the initial
> setup (assuming the application has reasonable NUMA behaviour by itself
> under a first-touch local allocation policy)
>   

Sure, for the simple cases it works.  But consider your first example 
followed by the second (you can even reboot the guest in the middle, but 
the bad assignment sticks).

And if the vcpu moves for some reason, things get screwed up permanently.

We should try to be predictable, not depend on behavior the guest has no 
real reason to exhibit so long as it follows the hardware specs.


-- 
error compiling committee.c: too many arguments to function


