* [Qemu-devel] [RFC PATCH v0] spapr: Abort when hash table size requirement isn't met
@ 2015-07-15 9:57 Bharata B Rao
2015-07-16 6:55 ` Bharata B Rao
0 siblings, 1 reply; 3+ messages in thread
From: Bharata B Rao @ 2015-07-15 9:57 UTC (permalink / raw)
To: qemu-devel; +Cc: Bharata B Rao, qemu-ppc, agraf, david
[This patch addresses an issue which is not prominently seen in mainline,
but is seen frequently in David's spapr-next branch. Though it is possible
to hit this issue with mainline too, the current version of the patch
is intended for David's tree.]
QEMU requests hash table allocation through the KVM_PPC_ALLOCATE_HTAB ioctl,
providing a size hint via the htab_shift value. Sometimes the hinted
size requirement can't be met by the host, which then returns a lower
value for htab_shift.
This was fine until recently, when the hash table size depended on
guest RAM size. With the intention of supporting memory hotplug, the
hash table size was recently changed to depend on maxram size. Since
maxram size is typically much higher than RAM size, the possibility
of the host not being able to meet the size requirement has increased.
This causes two problems:
- When memory hotplug is supported, we will not be able to grow up to
maxram if the host wasn't able to satisfy the hash table size for
the full maxram range.
- During migration, we can end up with different htab_shift values (and
hence different hash table sizes) at the source and target, which
causes the migration to fail.
Prevent the above conditions by refusing to start the guest if the
QEMU-requested hash table size requirement isn't met by the host.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
hw/ppc/spapr.c | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 4a648af..6ffe198 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1003,6 +1003,20 @@ static void spapr_reset_htab(sPAPRMachineState *spapr)
     if (shift > 0) {
         /* Kernel handles htab, we don't need to allocate one */
+        if (spapr->htab_shift != shift) {
+            /*
+             * Host couldn't allocate the hash table with the requested
+             * size. This can lead to two problems later:
+             * - Failure to grow till maxram_size via hotplug.
+             * - Failure to migrate if the host at the target ends up
+             *   allocating a different sized hash table.
+             *
+             * Prevent such conditions by aborting now.
+             */
+            error_setg(&error_abort, "Unable to allocate hash table, try "
+                       "with smaller maxmem value");
+        }
+
         spapr->htab_shift = shift;
         kvmppc_kern_htab = true;
--
2.1.0
* Re: [Qemu-devel] [RFC PATCH v0] spapr: Abort when hash table size requirement isn't met
From: Bharata B Rao @ 2015-07-16 6:55 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-ppc, agraf, david
On Wed, Jul 15, 2015 at 03:27:13PM +0530, Bharata B Rao wrote:
> [This patch addresses an issue which is not prominently seen in mainline,
> but seen frequently only in David's spapr-next branch. Though it is possible
> to see this issue with mainline too, the current version of the patch
> is intended for David's tree.]
>
> QEMU requests for hash table allocation through KVM_PPC_ALLOCATE_HTAB ioctl
> by providing the size hint via htab_shift value. Sometimes the hinted
> size requirement can't be met by the host and it returns with a lower
> value for htab_shift.
>
> This was fine until recently where the hash table size was dependent
> on guest RAM size. With the intention of supporting memory hotplug, hash
> table size was changed to depend on maxram size recently. Since it is
> typical to have maxram size to be much higher than RAM size, the possibility
> of host not being able to meet the size requirement has increased. This
> causes two problems:
>
> - When memory hotplug is supported, we will not be able to grow till
> maxram if the host wasn't able to satisfy the hash table size for
> the full maxram range.
This is a recoverable condition: the hotplug can fail gracefully.
> - During migration, we can end up having different htab_shift values (and
> hence different hash table sizes) at the source and target due to
> which the migration fails.
One possible way to solve this is to change (reduce) the maxram_size
based on the negotiated value of htab_shift and use the changed value
of maxram_size at the target during migration. However, AFAIK, there is
currently no way to communicate the changed maxram_size back to libvirt,
so this solution may not be feasible.
So the question is whether to allow the guest to boot with a reduced
hash table size and fail migration (this is the current behaviour)
or
as done in this patch, prevent the VM from booting altogether.
I am leaning towards the former. Thoughts?
Regards,
Bharata.
* Re: [Qemu-devel] [RFC PATCH v0] spapr: Abort when hash table size requirement isn't met
From: Bharata B Rao @ 2015-07-28 5:33 UTC (permalink / raw)
To: qemu-devel; +Cc: qemu-ppc, agraf, david
Any views on this?
On Thu, Jul 16, 2015 at 12:25:01PM +0530, Bharata B Rao wrote:
> On Wed, Jul 15, 2015 at 03:27:13PM +0530, Bharata B Rao wrote:
> > [This patch addresses an issue which is not prominently seen in mainline,
> > but seen frequently only in David's spapr-next branch. Though it is possible
> > to see this issue with mainline too, the current version of the patch
> > is intended for David's tree.]
> >
> > QEMU requests for hash table allocation through KVM_PPC_ALLOCATE_HTAB ioctl
> > by providing the size hint via htab_shift value. Sometimes the hinted
> > size requirement can't be met by the host and it returns with a lower
> > value for htab_shift.
> >
> > This was fine until recently where the hash table size was dependent
> > on guest RAM size. With the intention of supporting memory hotplug, hash
> > table size was changed to depend on maxram size recently. Since it is
> > typical to have maxram size to be much higher than RAM size, the possibility
> > of host not being able to meet the size requirement has increased. This
> > causes two problems:
> >
> > - When memory hotplug is supported, we will not be able to grow till
> > maxram if the host wasn't able to satisfy the hash table size for
> > the full maxram range.
>
> This is a recoverable condition where the hotplug can be gracefully failed.
>
> > - During migration, we can end up having different htab_shift values (and
> > hence different hash table sizes) at the source and target due to
> > which the migration fails.
>
> One possible way to solve this is to change (reduce) the maxram_size
> based on the negotiated value of htab_shift and use the changed value
> of maxram_size at the target during migration. However AFAIK, currently
> there is no way to communicate the changed maxram_size back to libvirt,
> so this solution may not be feasible.
>
> So the question is whether to allow the guest to boot with a reduced
> hash table size and fail migration (this is the current behaviour)
>
> or
>
> as done in this patch, prevent the VM from booting altogether.
>
> I am leaning towards the former. Thoughts?
>
> Regards,
> Bharata.