From: Igor Mammedov <imammedo@redhat.com>
To: qemu-devel@nongnu.org
Cc: mst@redhat.com, pbonzini@redhat.com, peterx@redhat.com,
	david@redhat.com, philmd@linaro.org, mtosatti@redhat.com
Subject: [PATCH v3 10/10] tcg: move interrupt caching and single step masking closer to user
Date: Fri,  8 Aug 2025 14:01:37 +0200	[thread overview]
Message-ID: <20250808120137.2208800-11-imammedo@redhat.com> (raw)
In-Reply-To: <20250808120137.2208800-1-imammedo@redhat.com>

In cpu_handle_interrupt(), the only place where the cached interrupt_request
has any effect is where CPU_INTERRUPT_SSTEP_MASK is applied and the cached
value is handed over to cpu_exec_interrupt() and need_replay_interrupt().

Simplify the code by moving the interrupt_request caching and the
CPU_INTERRUPT_SSTEP_MASK masking into the block where they actually matter,
and drop the reload of the cached value from CPUState::interrupt_request,
since the rest of the code uses CPUState::interrupt_request directly.
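
For reference, a standalone C sketch of the pattern being applied (all names
below are made up for illustration and are not QEMU APIs): the live request
field is tested directly everywhere, while a masked snapshot is taken only in
the one branch that hands the value to its consumer.

/*
 * Standalone sketch, not QEMU code: live request word checked directly,
 * masked snapshot taken only where it is consumed.
 */
#include <stdbool.h>
#include <stdio.h>

#define REQ_DEBUG      (1u << 0)
#define REQ_HARD       (1u << 1)  /* "external" request, maskable per step */
#define REQ_EXITTB     (1u << 2)
#define REQ_STEP_MASK  REQ_HARD

struct vcpu {
    unsigned int request;   /* live request bits */
    bool step_no_irq;       /* suppress external requests for this step */
};

/* Stand-in for the cpu_exec_interrupt() hook: acts on the masked snapshot. */
static bool consume_request(struct vcpu *cpu, unsigned int request)
{
    if (request & REQ_HARD) {
        cpu->request &= ~REQ_HARD;
        printf("handled external request\n");
        return true;
    }
    return false;
}

static bool handle_requests(struct vcpu *cpu)
{
    if (cpu->request & REQ_DEBUG) {      /* live field, no snapshot needed */
        cpu->request &= ~REQ_DEBUG;
        return true;
    }

    /* Snapshot and mask only where the masked value is actually consumed. */
    unsigned int request = cpu->request;
    if (cpu->step_no_irq) {
        request &= ~REQ_STEP_MASK;
    }
    if (consume_request(cpu, request)) {
        return true;
    }

    if (cpu->request & REQ_EXITTB) {     /* live field again, no reload */
        cpu->request &= ~REQ_EXITTB;
        printf("exit-tb request\n");
    }
    return false;
}

int main(void)
{
    struct vcpu cpu = { .request = REQ_HARD | REQ_EXITTB, .step_no_irq = true };
    handle_requests(&cpu);   /* external request masked out for this step */
    cpu.step_no_irq = false;
    handle_requests(&cpu);   /* now the external request is handled */
    return 0;
}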

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 accel/tcg/cpu-exec.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index 1269c2c6ba..82867f456c 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -779,13 +779,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
     qatomic_set_mb(&cpu->neg.icount_decr.u16.high, 0);
 
     if (unlikely(cpu_test_interrupt(cpu, ~0))) {
-        int interrupt_request;
         bql_lock();
-        interrupt_request = cpu->interrupt_request;
-        if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
-            /* Mask out external interrupts for this step. */
-            interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
-        }
         if (cpu_test_interrupt(cpu, CPU_INTERRUPT_DEBUG)) {
             cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
             cpu->exception_index = EXCP_DEBUG;
@@ -804,6 +798,7 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
             return true;
         } else {
             const TCGCPUOps *tcg_ops = cpu->cc->tcg_ops;
+            int interrupt_request = cpu->interrupt_request;
 
             if (cpu_test_interrupt(cpu, CPU_INTERRUPT_RESET)) {
                 replay_interrupt();
@@ -812,6 +807,11 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
                 return true;
             }
 
+            if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
+                /* Mask out external interrupts for this step. */
+                interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
+            }
+
             /*
              * The target hook has 3 exit conditions:
              * False when the interrupt isn't processed,
@@ -836,9 +836,6 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
                 cpu->exception_index = -1;
                 *last_tb = NULL;
             }
-            /* The target hook may have updated the 'cpu->interrupt_request';
-             * reload the 'interrupt_request' value */
-            interrupt_request = cpu->interrupt_request;
         }
 #endif /* !CONFIG_USER_ONLY */
         if (cpu_test_interrupt(cpu, CPU_INTERRUPT_EXITTB)) {
-- 
2.47.1


