From: Christian Borntraeger <borntraeger@de.ibm.com>
To: David Hildenbrand <david@redhat.com>, Cornelia Huck <cohuck@redhat.com>
Cc: Janosch Frank <frankja@linux.vnet.ibm.com>,
Thomas Huth <thuth@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>,
Alexander Graf <agraf@suse.de>,
qemu-s390x <qemu-s390x@nongnu.org>,
Halil Pasic <pasic@linux.vnet.ibm.com>,
imbrenda@linux.vnet.ibm.com, Richard Henderson <rth@twiddle.net>
Subject: Re: [Qemu-devel] [qemu-s390x] [PATCH 1/1] s390x/sclp: fix maxram calculation
Date: Mon, 30 Jul 2018 17:20:25 +0200 [thread overview]
Message-ID: <95e3d12c-12fb-c983-c0aa-e99c08321a98@de.ibm.com> (raw)
In-Reply-To: <9bdf3ea6-d0f4-d637-3e34-eb43a9821434@redhat.com>
On 07/30/2018 05:17 PM, David Hildenbrand wrote:
> On 30.07.2018 17:00, Christian Borntraeger wrote:
>>
>>
>> On 07/30/2018 04:34 PM, David Hildenbrand wrote:
>>> On 30.07.2018 16:09, Christian Borntraeger wrote:
>>>> We clamp ram_size down to a multiple of the SCLP increment size.
>>>> We do not do the same for maxram_size, which means that for some
>>>> guest sizes (e.g. -m 50000) maxram_size differs from ram_size.
>>>> This can break other code (e.g. CMMA migration) that uses maxram_size
>>>> to calculate the number of pages and then fails with errors.
>>>
>>> So the only problem is that the buffer sizes on the source and target
>>> differ?
>>
>> The problem is that the target tries to access a non-existing buffer when
>> committing all CMMA values, so the kernel returns EFAULT.
>>>
>
> Am I wrong, or does the CMMA migration code really not care about which
> parts of maxram are actually used (== which memory regions are actually defined)?
>
> If so, this looks broken to me, and the right fix is to use ram_size for
> now, because the CMMA migration code simply does not support maxram.
>
> (I assume using some -m X,maxmem=X+Y would make it fail in the same way)
>
> (this patch still makes sense and should be done)
I am looking for the minimal fix for 2.13, and ideally also for 2.12.1.
Can we agree on this fix and do the remaining work later?
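
For reference, here is a minimal sketch of the hunk I have in mind. It
follows the existing clamping logic in sclp_memory_init() in
hw/s390x/sclp.c; treat it as an illustration of the idea rather than the
final patch, the exact constants and surrounding code may differ:

    static void sclp_memory_init(SCLPDevice *sclp)
    {
        MachineState *machine = MACHINE(qdev_get_machine());
        ram_addr_t initial_mem = machine->ram_size;
        int increment_size = 20;

        /* The storage increment is a power-of-two multiple of 1M; grow it
         * until the number of increments fits the SCLP limit. */
        while ((initial_mem >> increment_size) > MAX_STORAGE_INCREMENTS) {
            increment_size++;
        }
        sclp->increment_size = increment_size;

        /* Clamp the initial memory down to an increment boundary. */
        initial_mem = initial_mem >> increment_size << increment_size;

        machine->ram_size = initial_mem;
        /* new: keep maxram_size consistent with the clamped ram_size */
        machine->maxram_size = initial_mem;

        /* sketch only: the real function also propagates the clamped size
         * into the global ram_size variable. */
    }

If I have the constants right, with -m 50000 the increment grows to 64 MiB
so the increment count stays within the limit, and ram_size gets clamped to
49984 MiB while maxram_size stays at 50000 MiB; that is exactly the mismatch
that makes the CMMA migration code address pages beyond the end of guest
memory on the target.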
Thread overview: 15+ messages
2018-07-30 14:09 [Qemu-devel] [PATCH 1/1] s390x/sclp: fix maxram calculation Christian Borntraeger
2018-07-30 14:34 ` David Hildenbrand
2018-07-30 15:00 ` [Qemu-devel] [qemu-s390x] " Christian Borntraeger
2018-07-30 15:17 ` David Hildenbrand
2018-07-30 15:20 ` Christian Borntraeger [this message]
2018-07-30 15:28 ` David Hildenbrand
2018-07-30 15:32 ` Cornelia Huck
2018-07-30 15:31 ` [Qemu-devel] " Christian Borntraeger
2018-07-30 16:58 ` Michael Roth
2018-07-31 6:52 ` Cornelia Huck
2018-07-31 10:48 ` Cornelia Huck
2018-07-30 15:43 ` David Hildenbrand
2018-07-30 15:47 ` Cornelia Huck
2018-07-31 8:34 ` [Qemu-devel] [qemu-s390x] " David Hildenbrand
2018-07-30 15:55 ` [Qemu-devel] " Cornelia Huck