From: Christian Borntraeger <borntraeger@de.ibm.com>
To: Peter Xu <peterx@redhat.com>, Igor Mammedov <imammedo@redhat.com>
Cc: thuth@redhat.com, david@redhat.com, cohuck@redhat.com,
qemu-devel@nongnu.org, qemu-s390x@nongnu.org,
pbonzini@redhat.com
Subject: Re: [PATCH v7 4/4] s390: do not call memory_region_allocate_system_memory() multiple times
Date: Mon, 30 Sep 2019 09:09:59 +0200 [thread overview]
Message-ID: <63e706b4-4a6a-3be5-6bb7-9c744d269d98@de.ibm.com> (raw)
In-Reply-To: <20190928012808.GA31218@xz-x1>
On 28.09.19 03:28, Peter Xu wrote:
> On Fri, Sep 27, 2019 at 03:33:20PM +0200, Igor Mammedov wrote:
>> On Thu, 26 Sep 2019 07:52:35 +0800
>> Peter Xu <peterx@redhat.com> wrote:
>>
>>> On Wed, Sep 25, 2019 at 01:51:05PM +0200, Igor Mammedov wrote:
>>>> On Wed, 25 Sep 2019 11:27:00 +0800
>>>> Peter Xu <peterx@redhat.com> wrote:
>>>>
>>>>> On Tue, Sep 24, 2019 at 10:47:51AM -0400, Igor Mammedov wrote:
>>>>>> s390 was trying to solve the limited KVM memslot size issue by abusing
>>>>>> memory_region_allocate_system_memory(), which breaks the API contract
>>>>>> that the function may be called only once.
>>>>>>
>>>>>> Besides being an invalid use of the API, the approach also introduced a
>>>>>> migration issue, since the RAM chunks for each KVM_SLOT_MAX_BYTES are
>>>>>> transferred in the migration stream as separate RAMBlocks.
>>>>>>
>>>>>> After discussion [1], it was agreed to break migration from older
>>>>>> QEMU for guests with RAM >8TB (as the feature is relatively new (since
>>>>>> 2.12) and considered not to be actually used downstream).
>>>>>> Migration should keep working for guests with less than 8TB, and for
>>>>>> guests with more than 8TB between QEMU 4.2 and newer binaries.
>>>>>> In case a user tries to migrate a guest with more than 8TB between
>>>>>> incompatible QEMU versions, migration should fail gracefully due to a
>>>>>> non-existing RAMBlock ID or a RAMBlock size mismatch.
>>>>>>
>>>>>> Taking the above into account, and that the KVM code is now able to
>>>>>> split a too-big MemorySection into several memslots, partially revert
>>>>>> commit bb223055b ("s390-ccw-virtio: allow for systems larger that
>>>>>> 7.999TB") and use kvm_set_max_memslot_size() to set the KVMSlot size
>>>>>> to KVM_SLOT_MAX_BYTES.
>>>>>>
>>>>>> 1) [PATCH RFC v2 4/4] s390: do not call memory_region_allocate_system_memory() multiple times
>>>>>>
>>>>>> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
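
The splitting mechanism referenced in the quoted commit message (added by
patch 3/4 of this series) boils down to capping each KVM memslot at a
maximum size and registering an oversized region as several consecutive
slots. Below is a minimal, self-contained sketch of that idea; it is a
simplified model with illustrative names, not the actual QEMU code.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TIB (1ULL << 40)

/* Illustrative stand-in for kvm_max_slot_size / KVM_SLOT_MAX_BYTES. */
static const uint64_t max_slot_size = 4 * TIB;

/* Register one memory section as as many memslots as needed. */
static void register_section_as_memslots(uint64_t start, uint64_t size)
{
    while (size) {
        uint64_t slot_size = size < max_slot_size ? size : max_slot_size;

        printf("memslot: start=0x%012" PRIx64 " size=0x%012" PRIx64 "\n",
               start, slot_size);
        start += slot_size;
        size -= slot_size;
    }
}

int main(void)
{
    /* A 9 TiB RAM region becomes 4 TiB + 4 TiB + 1 TiB memslots. */
    register_section_as_memslots(0, 9 * TIB);
    return 0;
}

Running it for a 9 TiB region with a 4 TiB cap prints three slots
(4 TiB, 4 TiB, 1 TiB), which is the kind of split the series relies on
for >8TB guests.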
>>>>>
>>>>> Acked-by: Peter Xu <peterx@redhat.com>
>>>>>
>>>>> IMHO it would be good to at least mention bb223055b9 in the commit
>>>>> message even if not with a "Fixed:" tag. May be amended during commit
>>>>> if anyone prefers.
>>>>
>>>> /me confused, bb223055b9 is mentioned in commit message
>>>
>>> I'm sorry, I overlooked that.
>>>
>>>>
>>>>> Also, this only applies the split limitation to s390. Would that be a
>>>>> good thing for some other archs as well?
>>>>
>>>> Don't we have a similar bitmap size issue in KVM for other archs?
>>>
>>> Yes, I thought we had. So I feel like it would be good to allow
>>> other archs to support >8TB mem as well. Thanks,
>> Another question: are there other archs with that much RAM
>> available/used in real life (if not, I'd wait for demand to arise first)?
>
> I don't know, so it was purely a question aside from the series. Sorry if
> that holds up your series somehow; that was not my intention.
>
>>
>> If we are to generalize it to other targets, then instead of using an
>> arbitrary memslot max size per target, we could just hardcode, or get
>> from KVM, the max supported bitmap size and use that to calculate
>> kvm_max_slot_size depending on the target page size.
>
> Right, if so I think hard-coding would be fine for now, probably with
> the smallest value across all archs (it should depend on the smallest
> page size, I guess).
>
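
If that generalization were done, the derivation Igor sketches above could
look roughly like this. This is purely illustrative; the helper name and
the rounding policy are assumptions, not existing QEMU code.

#include <limits.h>
#include <stdint.h>

/*
 * Illustrative only: the dirty bitmap position indicator is an int, so a
 * slot can cover at most INT_MAX pages; multiply by the target page size
 * and round down to a convenient power-of-two alignment to get a per-slot
 * byte limit.
 */
uint64_t derive_max_slot_bytes(uint64_t target_page_size, uint64_t align)
{
    uint64_t max_bytes = (uint64_t)INT_MAX * target_page_size;

    return max_bytes & ~(align - 1);
}

/* For 4 KiB pages rounded down to 1 TiB alignment:
 * derive_max_slot_bytes(4096, 1ULL << 40) == 7 TiB. */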
>>
>> Then there wouldn't be a need for machine-specific code
>> to care about it and pick/set arbitrary values.
>>
>> Another aspect to think about if we are to enable it for
>> other targets is memslot accounting. It doesn't affect s390,
>> but other targets that support memory hotplug currently assume a 1:1
>> relation between memory region and memslot, which holds true today
>> but would need to be amended in case splitting is enabled there.
>
> I didn't know this. So maybe it makes more sense to keep this s390-only
> here. Thanks,
OK. So shall I take the series as is via the s390 tree?
I would like to add the following patch on top if nobody minds:
Subject: [PATCH 1/1] s390/kvm: split kvm mem slots at 4TB
Instead of splitting at an unaligned address, we can simply split at
4TB.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
---
target/s390x/kvm.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index ad2dd14f7e78..611f56f4b5ac 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -126,12 +126,11 @@
/*
* KVM does only support memory slots up to KVM_MEM_MAX_NR_PAGES pages
* as the dirty bitmap must be managed by bitops that take an int as
- * position indicator. If we have a guest beyond that we will split off
- * new subregions. The split must happen on a segment boundary (1MB).
+ * position indicator. This would end at an unaligned address
+ * (0x7fffff00000). As future variants might provide larger pages
+ * and to make all addresses properly aligned, let us split at 4TB.
*/
-#define KVM_MEM_MAX_NR_PAGES ((1ULL << 31) - 1)
-#define SEG_MSK (~0xfffffULL)
-#define KVM_SLOT_MAX_BYTES ((KVM_MEM_MAX_NR_PAGES * TARGET_PAGE_SIZE) & SEG_MSK)
+#define KVM_SLOT_MAX_BYTES (4096UL * 1024 * 1024 * 1024)
static CPUWatchpoint hw_watchpoint;
/*
--
2.21.0
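
For reference, a quick standalone check of the numbers behind the comment
change above, assuming the current 4 KiB base page size (this program is
not part of the patch):

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint64_t max_pages = (1ULL << 31) - 1;             /* old KVM_MEM_MAX_NR_PAGES */
    uint64_t page_size = 4096;                         /* current base page size   */
    uint64_t old_limit = (max_pages * page_size) & ~0xfffffULL; /* 1 MB aligned    */
    uint64_t new_limit = 4096UL * 1024 * 1024 * 1024;  /* new: 4 TiB               */

    assert(old_limit == 0x7fffff00000ULL);   /* the awkward address quoted above   */
    assert(new_limit == 0x40000000000ULL);   /* 4 TiB, aligned for larger pages    */
    assert(new_limit < max_pages * page_size); /* still within the bitmap limit    */
    return 0;
}

So the old limit sat just short of 8 TiB at an awkward boundary, while
4 TiB stays well inside the dirty-bitmap limit and remains aligned for any
realistic larger page size.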
Thread overview: 20+ messages
2019-09-24 14:47 [PATCH v7 0/4] s390: stop abusing memory_region_allocate_system_memory() Igor Mammedov
2019-09-24 14:47 ` [PATCH v7 1/4] kvm: extract kvm_log_clear_one_slot Igor Mammedov
2019-09-30 10:25 ` Christian Borntraeger
2019-09-24 14:47 ` [PATCH v7 2/4] kvm: clear dirty bitmaps from all overlapping memslots Igor Mammedov
2019-09-24 14:47 ` [PATCH v7 3/4] kvm: split too big memory section on several memslots Igor Mammedov
2019-09-25 3:12 ` Peter Xu
2019-09-25 12:09 ` Igor Mammedov
2019-09-25 23:45 ` Peter Xu
2019-09-24 14:47 ` [PATCH v7 4/4] s390: do not call memory_region_allocate_system_memory() multiple times Igor Mammedov
2019-09-25 3:27 ` Peter Xu
2019-09-25 11:51 ` Igor Mammedov
2019-09-25 23:52 ` Peter Xu
2019-09-27 13:33 ` Igor Mammedov
2019-09-28 1:28 ` Peter Xu
2019-09-30 7:09 ` Christian Borntraeger [this message]
2019-09-30 9:33 ` Igor Mammedov
2019-09-30 10:04 ` Christian Borntraeger
2019-09-30 10:35 ` Paolo Bonzini
2019-09-25 7:47 ` [PATCH v7 0/4] s390: stop abusing memory_region_allocate_system_memory() Christian Borntraeger
2019-09-30 11:00 ` Christian Borntraeger