From: Marcelo Tosatti <mtosatti@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: drjones@redhat.com, ehabkost@redhat.com, lersek@redhat.com,
qemu-devel@nongnu.org, lcapitulino@redhat.com, bsd@redhat.com,
anthony@codemonkey.ws, hutao@cn.fujitsu.com,
y-goto@jp.fujitsu.com, peter.huangpeng@huawei.com,
afaerber@suse.de, Wanlong Gao <gaowanlong@cn.fujitsu.com>
Subject: Re: [Qemu-devel] [PATCH V17 00/11] Add support for binding guest numa nodes to host numa nodes
Date: Mon, 9 Dec 2013 16:10:33 -0200
Message-ID: <20131209181032.GA8315@amt.cnet>
In-Reply-To: <52A5FEF5.1010504@redhat.com>
On Mon, Dec 09, 2013 at 06:33:41PM +0100, Paolo Bonzini wrote:
> On 06/12/2013 19:49, Marcelo Tosatti wrote:
> >> > With your patches you'll have (without them it's worse, of course):
> >> >
> >> > RAM offset       physical address    node
> >> > 0-3840M          0-3840M             host node 0
> >> > 4096M-4352M      4096M-4352M         host node 0
> >> > 4352M-8192M      4352M-8192M         host node 1
> >> > 3840M-4096M      8192M-8448M         host node 1
> >> >
> >> > So only 0-3G and 5-8G are aligned; 3G-5G and 8G-8.25G cannot use
> >> > gigabyte pages because they are split across host nodes.
> > AFAIK the TLB caches virt->phys translations; why would the specifics
> > of a given phys address be a factor in TLB caching?
>
> The problem is that "-numa mem" receives memory sizes and these do not
> take into account the hole below 4G.
>
> Thus, two adjacent host-physical addresses (two adjacent ram_addr_t
> values) can map to guest-physical addresses that are far apart, be
> assigned to different guest nodes, and from there to different host
> nodes. In the above example this happens for 3G-5G.
The physical address, which is what the TLB uses, does not take node
information into account.
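
To make the arithmetic above concrete, here is a minimal sketch
(hypothetical helper, not the actual QEMU code) of why a 4096M "-numa mem"
boundary for node 0 ends up at guest-physical 4352M rather than 4096M once
the hole below 4G is accounted for:

    /* Hypothetical helper: map a cumulative amount of guest RAM to the
     * guest-physical address where it ends, assuming low RAM stops at
     * 3840M and resumes at 4096M (the i440FX layout quoted above). */
    #include <stdint.h>
    #include <stdio.h>

    #define MiB        (1024ULL * 1024)
    #define HOLE_START (3840 * MiB)
    #define HOLE_SIZE  ( 256 * MiB)

    static uint64_t ram_end_gpa(uint64_t ram_bytes)
    {
        return ram_bytes <= HOLE_START ? ram_bytes
                                       : ram_bytes + HOLE_SIZE;
    }

    int main(void)
    {
        /* "-numa node,mem=4096" for node 0: its RAM ends at GPA 4352M,
         * so the 4G-5G gigabyte page spans two guest (and host) nodes. */
        printf("%lluM\n",
               (unsigned long long)(ram_end_gpa(4096 * MiB) / MiB));
        return 0;
    }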
> On second thought, this is not particularly important, or at least not
> yet. It's not really possible to control the NUMA policy for
> hugetlbfs-allocated memory, right?
It is possible. I don't know what happens if conflicting NUMA policies
are specified for different virtual address ranges that map to a single
huge page.
However the kernel resolves that, it is not relevant here, since the TLB
caches virt->phys translations and not virt->{phys, node info}
translations.
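
For reference, a minimal sketch of setting a policy on huge-page-backed
memory via mbind(2) (assumes libnuma's <numaif.h> wrapper, preallocated
huge pages, and linking with -lnuma; not what QEMU does today):

    #define _GNU_SOURCE
    #include <numaif.h>      /* mbind(), MPOL_BIND */
    #include <sys/mman.h>
    #include <stdio.h>

    #define LEN (1ULL << 30)     /* 1G of huge-page-backed memory */

    int main(void)
    {
        /* Needs huge pages reserved, e.g. via /proc/sys/vm/nr_hugepages. */
        void *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        unsigned long nodemask = 1UL << 1;     /* bind to host node 1 */
        /* The policy applies per VMA; what the kernel does when two VMAs
         * with different policies cover one huge page is the open
         * question above. */
        if (mbind(p, LEN, MPOL_BIND, &nodemask,
                  sizeof(nodemask) * 8, 0) < 0) {
            perror("mbind");
            return 1;
        }
        return 0;
    }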
> >> > So rather than your patches, it seems simpler to just widen the PCI hole
> >> > to 1G for i440FX and 2G for q35.
> >> >
> >> > What do you think?
> >
> > The problem is that it's a guest-visible change. To get 1GB TLB
> > entries with "legacy guest-visible machine types" (which require new
> > machine types on the host side, but are invisible to the guest), that
> > won't work: Windows registration would be invalidated, etc.
>
> Yeah, that's a tradeoff to make.
Perhaps increasing the PCI hole size should be done for other reasons
anyway? Note that dropping the 1GB-alignment piix.c patch requires the
hole start + size to be 1GB aligned.
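
A rough sketch of that constraint (hypothetical check, not the piix.c
code):

    #include <stdbool.h>
    #include <stdint.h>

    #define GiB (1ULL << 30)

    /* Without the alignment patch, both the hole start (end of low RAM)
     * and the hole end (start + size, where high RAM resumes) must sit
     * on 1 GiB boundaries for the RAM mapping to stay 1 GiB aligned. */
    static bool pci_hole_1g_aligned(uint64_t hole_start, uint64_t hole_size)
    {
        return (hole_start % GiB) == 0 &&
               ((hole_start + hole_size) % GiB) == 0;
    }

For example, the current 3840M..4096M hole fails this check, while a
3G..4G hole (the 1G widening suggested above) passes.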