qemu-devel.nongnu.org archive mirror
From: Fam Zheng <famz@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: jan.kiszka@siemens.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 6/9] kvm: First step to push iothread lock out of inner run loop
Date: Tue, 23 Jun 2015 17:26:20 +0800	[thread overview]
Message-ID: <20150623092620.GA5798@ad.nay.redhat.com> (raw)
In-Reply-To: <1434646046-27150-7-git-send-email-pbonzini@redhat.com>

On Thu, 06/18 18:47, Paolo Bonzini wrote:
> From: Jan Kiszka <jan.kiszka@siemens.com>
> 
> This opens the path to get rid of the iothread lock on vmexits in KVM
> mode. On x86, the in-kernel irqchip has to be used because we would
> otherwise need to synchronize APIC and other per-CPU state accesses
> that could be changed concurrently.
> 
> s390x and ARM should be fine without specific locking as their
> pre/post-run callbacks are empty. MIPS and POWER require locking for
> the pre-run callback.
> 
> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  kvm-all.c         | 14 ++++++++++++--
>  target-i386/kvm.c | 18 ++++++++++++++++++
>  target-mips/kvm.c |  4 ++++
>  target-ppc/kvm.c  |  4 ++++
>  4 files changed, 38 insertions(+), 2 deletions(-)
> 
> diff --git a/kvm-all.c b/kvm-all.c
> index b2b1bc3..2bd8e9b 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -1795,6 +1795,8 @@ int kvm_cpu_exec(CPUState *cpu)
>          return EXCP_HLT;
>      }
>  
> +    qemu_mutex_unlock_iothread();
> +
>      do {
>          MemTxAttrs attrs;
>  
> @@ -1813,11 +1815,9 @@ int kvm_cpu_exec(CPUState *cpu)
>               */
>              qemu_cpu_kick_self();
>          }
> -        qemu_mutex_unlock_iothread();
>  
>          run_ret = kvm_vcpu_ioctl(cpu, KVM_RUN, 0);
>  
> -        qemu_mutex_lock_iothread();
>          attrs = kvm_arch_post_run(cpu, run);
>  
>          if (run_ret < 0) {
> @@ -1836,20 +1836,24 @@ int kvm_cpu_exec(CPUState *cpu)
>          switch (run->exit_reason) {
>          case KVM_EXIT_IO:
>              DPRINTF("handle_io\n");
> +            qemu_mutex_lock_iothread();
>              kvm_handle_io(run->io.port, attrs,
>                            (uint8_t *)run + run->io.data_offset,
>                            run->io.direction,
>                            run->io.size,
>                            run->io.count);
> +            qemu_mutex_unlock_iothread();
>              ret = 0;
>              break;
>          case KVM_EXIT_MMIO:
>              DPRINTF("handle_mmio\n");
> +            qemu_mutex_lock_iothread();
>              address_space_rw(&address_space_memory,
>                               run->mmio.phys_addr, attrs,
>                               run->mmio.data,
>                               run->mmio.len,
>                               run->mmio.is_write);
> +            qemu_mutex_unlock_iothread();
>              ret = 0;
>              break;
>          case KVM_EXIT_IRQ_WINDOW_OPEN:
> @@ -1858,7 +1862,9 @@ int kvm_cpu_exec(CPUState *cpu)
>              break;
>          case KVM_EXIT_SHUTDOWN:
>              DPRINTF("shutdown\n");
> +            qemu_mutex_lock_iothread();
>              qemu_system_reset_request();
> +            qemu_mutex_unlock_iothread();
>              ret = EXCP_INTERRUPT;
>              break;
>          case KVM_EXIT_UNKNOWN:

More context:

           fprintf(stderr, "KVM: unknown exit, hardware reason %" PRIx64 "\n",
                   (uint64_t)run->hw.hardware_exit_reason);
           ret = -1;
           break;
       case KVM_EXIT_INTERNAL_ERROR:
*          ret = kvm_handle_internal_error(cpu, run);
           break;
       case KVM_EXIT_SYSTEM_EVENT:
           switch (run->system_event.type) {
           case KVM_SYSTEM_EVENT_SHUTDOWN:
*              qemu_system_shutdown_request();
               ret = EXCP_INTERRUPT;
               break;
           case KVM_SYSTEM_EVENT_RESET:
*              qemu_system_reset_request();
               ret = EXCP_INTERRUPT;
>              break;
>          default:
>              DPRINTF("kvm_arch_handle_exit\n");
> +            qemu_mutex_lock_iothread();
>              ret = kvm_arch_handle_exit(cpu, run);
> +            qemu_mutex_unlock_iothread();
>              break;
>          }
>      } while (ret == 0);
>  
> +    qemu_mutex_lock_iothread();
> +
>      if (ret < 0) {
>          cpu_dump_state(cpu, stderr, fprintf, CPU_DUMP_CODE);
>          vm_stop(RUN_STATE_INTERNAL_ERROR);

Could you explain why the three calls marked with "*" above are safe?

Fam
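
[Editor's aside: the locking pattern the patch establishes — drop the
global iothread lock before the KVM_RUN loop and re-take it only around
exit handlers that touch shared device state — can be sketched as a
standalone POSIX-threads program. This is a simplified illustration, not
QEMU code: fake_kvm_run, cpu_exec_sketch, and the exit reasons are
hypothetical stand-ins for KVM_RUN, kvm_cpu_exec, and the kvm_run exit
codes.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;
static bool iothread_locked = true;   /* caller holds the lock on entry */

static void lock_iothread(void)
{
    pthread_mutex_lock(&iothread_lock);
    iothread_locked = true;
}

static void unlock_iothread(void)
{
    iothread_locked = false;
    pthread_mutex_unlock(&iothread_lock);
}

enum exit_reason { EXIT_IO, EXIT_IRQ_WINDOW, EXIT_DONE };

/* Stand-in for the KVM_RUN ioctl: returns a canned sequence of exits. */
static enum exit_reason fake_kvm_run(void)
{
    static const enum exit_reason seq[] = { EXIT_IO, EXIT_IRQ_WINDOW, EXIT_DONE };
    static int i;
    return seq[i++];
}

/* Mirrors the shape of kvm_cpu_exec() after the patch: the unlock is
 * hoisted out of the loop, and the lock is re-taken per exit reason. */
static int cpu_exec_sketch(void)
{
    int ret = 0;

    unlock_iothread();                 /* hoisted before the run loop */
    do {
        assert(!iothread_locked);      /* vcpu runs without the lock */
        switch (fake_kvm_run()) {
        case EXIT_IO:
            lock_iothread();           /* device emulation needs it */
            /* ... I/O handling would go here ... */
            unlock_iothread();
            ret = 0;
            break;
        case EXIT_IRQ_WINDOW:
            ret = 0;                   /* no shared state: no lock */
            break;
        case EXIT_DONE:
            ret = 1;
            break;
        }
    } while (ret == 0);
    lock_iothread();                   /* caller expects the lock held */
    return ret;
}
```

The key invariant, as in the patch, is that the function leaves the lock
in the same state it found it, while the hot KVM_RUN path never holds it.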


Thread overview: 28+ messages
2015-06-18 16:47 [Qemu-devel] [PATCH for-2.4 0/9] KVM: Do I/O outside BQL whenever possible Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 1/9] main-loop: use qemu_mutex_lock_iothread consistently Paolo Bonzini
2015-06-23 13:49   ` Frederic Konrad
2015-06-23 13:56     ` Paolo Bonzini
2015-06-23 14:18       ` Frederic Konrad
2015-06-18 16:47 ` [Qemu-devel] [PATCH 2/9] main-loop: introduce qemu_mutex_iothread_locked Paolo Bonzini
2015-06-23  8:48   ` Fam Zheng
2015-06-18 16:47 ` [Qemu-devel] [PATCH 3/9] memory: Add global-locking property to memory regions Paolo Bonzini
2015-06-23  8:51   ` Fam Zheng
2015-06-18 16:47 ` [Qemu-devel] [PATCH 4/9] exec: pull qemu_flush_coalesced_mmio_buffer() into address_space_rw/ld*/st* Paolo Bonzini
2015-06-23  9:05   ` Fam Zheng
2015-06-23  9:12     ` Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 5/9] memory: let address_space_rw/ld*/st* run outside the BQL Paolo Bonzini
2015-06-24 16:56   ` Alex Bennée
2015-06-24 17:21     ` Paolo Bonzini
2015-06-24 18:50       ` Alex Bennée
2015-06-25  8:13         ` Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 6/9] kvm: First step to push iothread lock out of inner run loop Paolo Bonzini
2015-06-18 18:19   ` Christian Borntraeger
2015-06-23  9:26   ` Fam Zheng [this message]
2015-06-23  9:29     ` Paolo Bonzini
2015-06-23  9:45       ` Fam Zheng
2015-06-23  9:49         ` Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 7/9] kvm: Switch to unlocked PIO Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 8/9] acpi: mark PMTIMER as unlocked Paolo Bonzini
2015-06-18 16:47 ` [Qemu-devel] [PATCH 9/9] kvm: Switch to unlocked MMIO Paolo Bonzini
  -- strict thread matches above, loose matches on Subject: below --
2015-06-24 16:25 [Qemu-devel] [PATCH for-2.4 v2 0/9] KVM: Do I/O outside BQL whenever possible Paolo Bonzini
2015-06-24 16:25 ` [Qemu-devel] [PATCH 6/9] kvm: First step to push iothread lock out of inner run loop Paolo Bonzini
2015-07-02  8:20 [Qemu-devel] [PATCH for-2.4 0/9 v3] KVM: Do I/O outside BQL whenever possible Paolo Bonzini
2015-07-02  8:20 ` [Qemu-devel] [PATCH 6/9] kvm: First step to push iothread lock out of inner run loop Paolo Bonzini
