From: Bharata B Rao <bharata@linux.vnet.ibm.com>
To: Greg Kurz <groug@kaod.org>
Cc: qemu-devel@nongnu.org, qemu-ppc@nongnu.org,
Michael Roth <mdroth@linux.vnet.ibm.com>,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] [PATCH] spapr: fix memory hotplug error path
Date: Tue, 4 Jul 2017 09:01:43 +0530 [thread overview]
Message-ID: <20170704033143.GA7689@in.ibm.com> (raw)
In-Reply-To: <149908449117.14256.2821600309813941055.stgit@bahia.lan>
On Mon, Jul 03, 2017 at 02:21:31PM +0200, Greg Kurz wrote:
> QEMU shouldn't abort if spapr_add_lmbs()->spapr_drc_attach() fails.
> Let's propagate the error instead, like it is done everywhere else
> where spapr_drc_attach() is called.
>
> Signed-off-by: Greg Kurz <groug@kaod.org>
> ---
> hw/ppc/spapr.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 70b3fd374e2b..e103be500189 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2601,6 +2601,7 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
> int i, fdt_offset, fdt_size;
> void *fdt;
> uint64_t addr = addr_start;
> + Error *local_err = NULL;
>
> for (i = 0; i < nr_lmbs; i++) {
> drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB,
> @@ -2611,7 +2612,12 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
> fdt_offset = spapr_populate_memory_node(fdt, node, addr,
> SPAPR_MEMORY_BLOCK_SIZE);
>
> - spapr_drc_attach(drc, dev, fdt, fdt_offset, errp);
> + spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
> + if (local_err) {
> + g_free(fdt);
> + error_propagate(errp, local_err);
> + return;
> + }
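As an aside, the `local_err` / `error_propagate()` pattern used in the hunk above can be sketched in isolation. This is a simplified stand-in for QEMU's Error API, purely illustrative (the real `error_setg()` takes a format string and source location, and `spapr_drc_attach()` obviously does more than this `attach_one()` placeholder):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-ins for QEMU's Error API -- illustrative only. */
typedef struct Error {
    const char *msg;
} Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp && !*errp) {
        Error *err = malloc(sizeof(*err));
        err->msg = msg;
        *errp = err;
    }
}

static void error_propagate(Error **dst, Error *src)
{
    if (dst && !*dst) {
        *dst = src;      /* hand ownership to the caller */
    } else {
        free(src);       /* caller passed NULL or already holds an error */
    }
}

/* Hypothetical per-LMB attach that fails on a chosen iteration. */
static void attach_one(int i, int fail_at, Error **errp)
{
    if (i == fail_at) {
        error_setg(errp, "attach failed");
    }
}

/* The pattern from the patch: check a local error after each call and
 * bail out early, instead of passing errp straight into the loop body
 * where a set error could be silently overwritten or ignored. */
static int add_lmbs(int nr_lmbs, int fail_at, Error **errp)
{
    int i;
    for (i = 0; i < nr_lmbs; i++) {
        Error *local_err = NULL;

        attach_one(i, fail_at, &local_err);
        if (local_err) {
            error_propagate(errp, local_err);
            return i;    /* LMBs attached before the failure */
        }
    }
    return nr_lmbs;
}
```

The point of the local variable is that each loop iteration gets a fresh, known-NULL error pointer, so a failure is detected immediately rather than only being visible to the caller after the loop has kept going.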
There is some history to this. I was doing similar error recovery and
propagation here during the memory hotplug development phase, until Igor
suggested that we shouldn't try to recover after we have made
guest-visible changes.
Refer to "changes in v6" section in this post:
https://lists.gnu.org/archive/html/qemu-ppc/2015-06/msg00296.html
However, at that time we were adding memory via the DRC index method,
and hence would attach and online one LMB at a time. With that method,
if an intermediate attach failed, we would end up with a few LMBs
already onlined by the guest. We have since switched (optionally, based
on dedicated_hp_event_source) to the count-indexed method of hotplug,
where we attach all the LMBs one by one and then request the guest to
hotplug all of them at once.
So it will be a bit tricky to abort for the index-based case while
recovering correctly for the count-indexed case.
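The distinction can be sketched abstractly: under the count-indexed method nothing is guest-visible until after every attach has succeeded, so a mid-loop failure can still be unwound cleanly. The names and the two-phase model below are illustrative, not QEMU code:

```c
#include <assert.h>
#include <stdbool.h>

#define NR_LMBS 8

/* Toy model: each LMB is first "attached" (QEMU-internal state) and
 * only later "notified" to the guest (guest-visible state). */
static bool attached[NR_LMBS];
static bool guest_notified[NR_LMBS];

/* Hypothetical attach that fails on a chosen iteration. */
static bool attach_lmb(int i, int fail_at)
{
    if (i == fail_at) {
        return false;
    }
    attached[i] = true;
    return true;
}

/* Count-indexed style: attach everything first, notify the guest once
 * at the end. A failure mid-loop is recoverable because the guest has
 * not seen anything yet; we just undo the internal attaches. */
static bool add_count_indexed(int fail_at)
{
    int i;
    for (i = 0; i < NR_LMBS; i++) {
        if (!attach_lmb(i, fail_at)) {
            while (i-- > 0) {
                attached[i] = false;   /* roll back; guest saw nothing */
            }
            return false;
        }
    }
    for (i = 0; i < NR_LMBS; i++) {
        guest_notified[i] = true;      /* single guest-visible step */
    }
    return true;
}
```

Under the index-based method there is no equivalent clean rollback: the guest-visible notification happens per LMB inside the loop, so by the time an attach fails, earlier LMBs may already be onlined in the guest, which is exactly the recovery problem described above.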
Regards,
Bharata.
Thread overview: 9+ messages
2017-07-03 12:21 [Qemu-devel] [PATCH] spapr: fix memory hotplug error path Greg Kurz
2017-07-03 13:13 ` [Qemu-devel] [Qemu-ppc] " Daniel Henrique Barboza
2017-07-03 13:43 ` [Qemu-devel] " Igor Mammedov
2017-07-03 14:34 ` Greg Kurz
2017-07-04 3:31 ` Bharata B Rao [this message]
2017-07-04 3:50 ` Bharata B Rao
2017-07-04 8:02 ` Greg Kurz
2017-07-04 9:11 ` Bharata B Rao
2017-07-04 9:15 ` Greg Kurz