From: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: aik@au1.ibm.com, qemu-devel@nongnu.org, groug@kaod.org,
paulus@ozlabs.org, qemu-ppc@nongnu.org
Subject: Re: [Qemu-devel] [PATCH v9 3/6] target/ppc: Handle NMI guest exit
Date: Thu, 6 Jun 2019 10:15:47 +0530
Message-ID: <adf9aa90-3983-e457-a092-e982d091a17e@linux.vnet.ibm.com>
In-Reply-To: <20190606014306.GF10319@umbus.fritz.box>
On Thursday 06 June 2019 07:13 AM, David Gibson wrote:
> On Wed, May 29, 2019 at 11:10:32AM +0530, Aravinda Prasad wrote:
>> Memory errors, such as bit flips, that cannot be corrected
>> by hardware are passed on to the kernel for handling.
>> If the faulting memory address belongs to a guest, then
>> the guest kernel is responsible for taking suitable action.
>> Patch [1] enhances KVM to exit the guest with the exit reason
>> set to KVM_EXIT_NMI in such cases. This patch handles the
>> KVM_EXIT_NMI exit.
>>
>> [1] https://www.spinics.net/lists/kvm-ppc/msg12637.html
>> (e20bbd3d and related commits)
>>
>> Signed-off-by: Aravinda Prasad <aravinda@linux.vnet.ibm.com>
>> ---
>>  hw/ppc/spapr.c          |  1 +
>>  hw/ppc/spapr_events.c   | 23 +++++++++++++++++++++++
>>  hw/ppc/spapr_rtas.c     |  5 +++++
>>  include/hw/ppc/spapr.h  |  6 ++++++
>>  target/ppc/kvm.c        | 16 ++++++++++++++++
>>  target/ppc/kvm_ppc.h    |  2 ++
>>  target/ppc/trace-events |  1 +
>>  7 files changed, 54 insertions(+)
>>
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index fae28a9..6b6c962 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -1809,6 +1809,7 @@ static void spapr_machine_reset(void)
>>  
>>      spapr->cas_reboot = false;
>>  
>> +    spapr->mc_status = -1;
>>      spapr->guest_machine_check_addr = -1;
>>  
>>      /* Signal all vCPUs waiting on this condition */
>> diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
>> index ae0f093..a18446b 100644
>> --- a/hw/ppc/spapr_events.c
>> +++ b/hw/ppc/spapr_events.c
>> @@ -620,6 +620,29 @@ void spapr_hotplug_req_remove_by_count_indexed(SpaprDrcType drc_type,
>>                              RTAS_LOG_V6_HP_ACTION_REMOVE, drc_type, &drc_id);
>>  }
>>
>> +void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered)
>> +{
>> +    SpaprMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>
> You ignore the 'recovered' parameter, is that right?
I do use the "recovered" parameter, but only in the next patch. It was
left unused here when the patch was split in one of the earlier
versions. I will fix that.
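
Roughly, "recovered" ends up steering the disposition that the error
log built for the guest will report. As a standalone illustration of
that pattern (a minimal sketch; the names and constants below are made
up for illustration and are not taken from the actual follow-up patch):

/* Illustrative sketch only -- not the actual follow-up patch code. */
#include <stdbool.h>
#include <stdint.h>

/* PAPR-style dispositions; the exact field encoding used in the real
 * RTAS error log differs. */
enum {
    DISP_FULLY_RECOVERED = 0,   /* no further guest action needed */
    DISP_NOT_RECOVERED   = 2    /* guest kernel must act on the error */
};

/* Map the KVM exit's "recovered" flag to the disposition the guest
 * error log would carry. */
static uint8_t mce_disposition(bool recovered)
{
    return recovered ? DISP_FULLY_RECOVERED : DISP_NOT_RECOVERED;
}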
>
>> +    while (spapr->mc_status != -1) {
>> +        /*
>> +         * Check whether the same CPU got a machine check error
>> +         * while it was still handling the previous one (i.e., before
>> +         * that CPU called "ibm,nmi-interlock")
>> +         */
>> +        if (spapr->mc_status == cpu->vcpu_id) {
>> +            qemu_system_guest_panicked(NULL);
>> +            return;
>> +        }
>> +        qemu_cond_wait_iothread(&spapr->mc_delivery_cond);
>> +        /* Meanwhile, if the system is reset, just return */
>> +        if (spapr->guest_machine_check_addr == -1) {
>> +            return;
>> +        }
>> +    }
>> +    spapr->mc_status = cpu->vcpu_id;
>> +}
>> +
>>  static void check_exception(PowerPCCPU *cpu, SpaprMachineState *spapr,
>>                              uint32_t token, uint32_t nargs,
>>                              target_ulong args,
>> diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
>> index e7509cf..e0bdfc8 100644
>> --- a/hw/ppc/spapr_rtas.c
>> +++ b/hw/ppc/spapr_rtas.c
>> @@ -379,6 +379,11 @@ static void rtas_ibm_nmi_interlock(PowerPCCPU *cpu,
>>          /* NMI register not called */
>>          rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
>>      } else {
>> +        /*
>> +         * The vCPU issuing "ibm,nmi-interlock" is done with NMI
>> +         * handling, hence unset mc_status.
>> +         */
>> +        spapr->mc_status = -1;
>>          qemu_cond_signal(&spapr->mc_delivery_cond);
>>          rtas_st(rets, 0, RTAS_OUT_SUCCESS);
>>      }
>> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
>> index 9dc5e30..fc3a776 100644
>> --- a/include/hw/ppc/spapr.h
>> +++ b/include/hw/ppc/spapr.h
>> @@ -190,6 +190,11 @@ struct SpaprMachineState {
>>
>>      /* State related to "ibm,nmi-register" and "ibm,nmi-interlock" calls */
>>      target_ulong guest_machine_check_addr;
>> +    /*
>> +     * mc_status is set to -1 if no machine check is in progress; otherwise
>> +     * it holds the vcpu_id of the CPU handling the machine check.
>> +     */
>> +    int mc_status;
>>      QemuCond mc_delivery_cond;
>>  
>>      /*< public >*/
>> @@ -793,6 +798,7 @@ void spapr_clear_pending_events(SpaprMachineState *spapr);
>>  int spapr_max_server_number(SpaprMachineState *spapr);
>>  void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
>>                        uint64_t pte0, uint64_t pte1);
>> +void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
>>  
>>  /* DRC callbacks. */
>>  void spapr_core_release(DeviceState *dev);
>> diff --git a/target/ppc/kvm.c b/target/ppc/kvm.c
>> index 3bf0a46..39f1a73 100644
>> --- a/target/ppc/kvm.c
>> +++ b/target/ppc/kvm.c
>> @@ -1761,6 +1761,11 @@ int kvm_arch_handle_exit(CPUState *cs, struct kvm_run *run)
>>          ret = 0;
>>          break;
>>  
>> +    case KVM_EXIT_NMI:
>> +        trace_kvm_handle_nmi_exception();
>> +        ret = kvm_handle_nmi(cpu, run);
>> +        break;
>> +
>>      default:
>>          fprintf(stderr, "KVM: unknown exit reason %d\n", run->exit_reason);
>>          ret = -1;
>> @@ -2844,6 +2849,17 @@ int kvm_arch_msi_data_to_gsi(uint32_t data)
>>      return data & 0xffff;
>>  }
>>
>> +int kvm_handle_nmi(PowerPCCPU *cpu, struct kvm_run *run)
>> +{
>> +    bool recovered = run->flags & KVM_RUN_PPC_NMI_DISP_FULLY_RECOV;
>> +
>> +    cpu_synchronize_state(CPU(cpu));
>> +
>> +    spapr_mce_req_event(cpu, recovered);
>
> Urgh.. KVM calling directly into spapr code isn't conceptually
> correct, although it's usually kind of ok in practice. I guess we
> already do it for hypercalls, so this is no worse. If we ever have
> NMI events for KVM PR or BookE KVM which could happen for non-PAPR
> guests, I guess we'll need to re-examine this.
ok.
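
If we ever do need that re-examination, one possible direction would be
a registered hook so that target/ppc/kvm.c never names spapr directly.
A minimal sketch of the idea (purely hypothetical; no such interface
exists in QEMU today, and all names below are invented):

/* Hypothetical layering sketch -- not an existing QEMU interface. */
#include <stdbool.h>
#include <stddef.h>

typedef struct PowerPCCPU PowerPCCPU;   /* opaque in this sketch */

/* Each machine type (sPAPR, BookE, KVM PR, ...) would register its
 * own NMI handler at init time, instead of KVM code calling
 * spapr_mce_req_event() by name. */
typedef void (*ppc_nmi_handler_fn)(PowerPCCPU *cpu, bool recovered);

static ppc_nmi_handler_fn nmi_handler;

void ppc_register_nmi_handler(ppc_nmi_handler_fn fn)
{
    nmi_handler = fn;
}

/* kvm_handle_nmi() would then dispatch indirectly:
 *
 *     if (nmi_handler) {
 *         nmi_handler(cpu, recovered);
 *     }
 *
 * which keeps KVM_EXIT_NMI handling machine-agnostic.
 */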
>
>> +    return 0;
>> +}
>> +
>> int kvmppc_enable_hwrng(void)
>> {
>>      if (!kvm_enabled() || !kvm_check_extension(kvm_state, KVM_CAP_PPC_HWRNG)) {
>> diff --git a/target/ppc/kvm_ppc.h b/target/ppc/kvm_ppc.h
>> index 45776ca..18693f1 100644
>> --- a/target/ppc/kvm_ppc.h
>> +++ b/target/ppc/kvm_ppc.h
>> @@ -81,6 +81,8 @@ bool kvmppc_hpt_needs_host_contiguous_pages(void);
>> void kvm_check_mmu(PowerPCCPU *cpu, Error **errp);
>> void kvmppc_set_reg_ppc_online(PowerPCCPU *cpu, unsigned int online);
>>
>> +int kvm_handle_nmi(PowerPCCPU *cpu, struct kvm_run *run);
>> +
>> #else
>>
>> static inline uint32_t kvmppc_get_tbfreq(void)
>> diff --git a/target/ppc/trace-events b/target/ppc/trace-events
>> index 3dc6740..6d15aa9 100644
>> --- a/target/ppc/trace-events
>> +++ b/target/ppc/trace-events
>> @@ -28,3 +28,4 @@ kvm_handle_papr_hcall(void) "handle PAPR hypercall"
>> kvm_handle_epr(void) "handle epr"
>> kvm_handle_watchdog_expiry(void) "handle watchdog expiry"
>> kvm_handle_debug_exception(void) "handle debug exception"
>> +kvm_handle_nmi_exception(void) "handle NMI exception"
>>
>
--
Regards,
Aravinda