netdev.vger.kernel.org archive mirror
From: Paolo Bonzini <pbonzini@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH] vhost: support upto 509 memory regions
Date: Tue, 17 Feb 2015 15:11:13 +0100	[thread overview]
Message-ID: <54E34C01.5060304@redhat.com> (raw)
In-Reply-To: <20150217132931.GB6362@redhat.com>



On 17/02/2015 14:29, Michael S. Tsirkin wrote:
> On Tue, Feb 17, 2015 at 02:11:37PM +0100, Paolo Bonzini wrote:
>>
>>
>> On 17/02/2015 13:32, Michael S. Tsirkin wrote:
>>> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>>>>
>>>>
>>>> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>>>>>> Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>>>>>> to match KVM_USER_MEM_SLOTS fixes issue for vhost-net.
>>>>>>
>>>>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>>>>>
>>>>> This scares me a bit: each region is 32byte, we are talking
>>>>> a 16K allocation that userspace can trigger.
>>>>
>>>> What's bad with a 16K allocation?
>>>
>>> It fails when memory is fragmented.
>>
>> If memory is _that_ fragmented I think you have much bigger problems
>> than vhost.
>>
>>> I'm guessing kvm doesn't do memory scans on data path, vhost does.
>>
>> It does for MMIO memory-to-memory writes, but that's not a particularly
>> fast path.
>>
>> KVM doesn't access the memory map on fast paths, but QEMU does, so I
>> don't think it's beyond the expectations of the kernel.
> 
> QEMU has an elaborate data structure to deal with that.

It's not elaborate, it's just a radix tree.  The complicated part is
building the flat view and computing what changed in the memory map, but
none of this would have to be done in vhost.  vhost gets the flat memory
map in VHOST_SET_MEM_TABLE.

A lookup is basically:

#define LOG_TRIE_WIDTH      (PAGE_SHIFT - LOG_BITS_PER_LONG)

struct memmap_trie_node {
	/* tagged values: bit 0 set = interior node, clear = leaf */
	unsigned long child[1 << LOG_TRIE_WIDTH];
};

unsigned long node_val = (unsigned long) trie_root;
/* vhost_address_space_bits: log2 of highest valid address in the memory map */
if (addr & (-1ULL << vhost_address_space_bits))
	return NULL;

addr <<= 64 - vhost_address_space_bits;
do {
	struct memmap_trie_node *node;
	unsigned i = addr >> (64 - LOG_TRIE_WIDTH);
	addr <<= LOG_TRIE_WIDTH;
	node = (struct memmap_trie_node *) (node_val - 1);	/* clear tag bit */
	node_val = node->child[i];
} while (node_val & 1);
return (struct vhost_mem_slot *) node_val;

bit 0: 1 if interior node, 0 if leaf (trie_root itself is stored tagged)

if leaf:
	bits 1-63: pointer to mem table entry
if not leaf:
	bits 1-63: pointer to next level

>>  For example you
>> can use a radix tree (not lib/radix-tree.c unfortunately), and cache
>> GVA->HPA translations if it turns out that lookup has become a hot path.
> 
> All vhost lookups are hot path.

What % is lookup vs the networking stuff?  Also, adding a simple MRU
cache might make lookups less prominent in the profile.

>> The addressing space of x86 is in practice 44 bits or fewer, and each
>> slot will typically be at least 1 GiB, so you only have 14 bits to
>> dispatch on.   It's probably possible to only have two or three levels
>> in the radix tree in the common case, and beat the linear scan real quick.
> 
> Not if there are about 6 regions, I think.

It depends on many factors including branch prediction, MRU cache hits, etc.

> Increasing the number might be reasonable for workloads such as nested
> virt.

Why does nested virt matter?

Paolo


Thread overview: 16+ messages
2015-02-13 15:49 [PATCH] vhost: support upto 509 memory regions Igor Mammedov
2015-02-17  9:02 ` Michael S. Tsirkin
2015-02-17 10:59   ` Paolo Bonzini
2015-02-17 12:32     ` Michael S. Tsirkin
2015-02-17 13:11       ` Paolo Bonzini
2015-02-17 13:29         ` Michael S. Tsirkin
2015-02-17 14:11           ` Paolo Bonzini [this message]
2015-02-17 15:02           ` Igor Mammedov
2015-02-17 17:09             ` Paolo Bonzini
2015-02-17 14:44       ` Igor Mammedov
2015-02-17 14:45         ` Paolo Bonzini
2015-02-18  0:53       ` Eric Northup
2015-02-18  4:27         ` Michael S. Tsirkin
2015-05-18 16:22           ` Andrey Korolyov
2015-05-18 16:28             ` Michael S. Tsirkin
2015-05-19 11:50             ` Igor Mammedov
