From: peterx@redhat.com
To: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
peterx@redhat.com, "Fabiano Rosas" <farosas@suse.de>,
"David Hildenbrand" <david@redhat.com>,
"Prasad Pandit" <ppandit@redhat.com>,
"Jonathan Cameron" <Jonathan.Cameron@huawei.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>
Subject: [PULL 12/34] physmem: Reduce local variable scope in flatview_read/write_continue()
Date: Mon, 11 Mar 2024 17:59:03 -0400
Message-ID: <20240311215925.40618-13-peterx@redhat.com>
In-Reply-To: <20240311215925.40618-1-peterx@redhat.com>
From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Precursor to factoring out the inner loops for reuse.
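With val, release_lock and ram_ptr narrowed to the branches that actually
use them, each arm of the loop body becomes self-contained: no state is
carried across iterations or shared between the MMIO and RAM paths, so the
body can be lifted into a helper almost verbatim. As a rough, illustrative
sketch of the shape such a helper could take for the write side (the name
flatview_write_continue_step and the exact signature are hypothetical here,
assuming the usual system/physmem.c internals; the real factoring is done in
the next patch of the series):

  /* Illustrative sketch only: one step of the write loop as a
   * self-contained helper, assuming the surrounding system/physmem.c
   * context (flatview_access_allowed(), prepare_mmio_access(), etc.).
   * Name and signature are hypothetical, not taken from this patch.
   */
  static MemTxResult flatview_write_continue_step(MemTxAttrs attrs,
                                                  const uint8_t *buf,
                                                  hwaddr mr_addr, hwaddr *l,
                                                  MemoryRegion *mr)
  {
      if (!flatview_access_allowed(mr, attrs, mr_addr, *l)) {
          return MEMTX_ACCESS_ERROR;
      }

      if (!memory_access_is_direct(mr, true)) {
          /* MMIO case: val and release_lock live only in this branch */
          bool release_lock = prepare_mmio_access(mr);
          uint64_t val;
          MemTxResult result;

          *l = memory_access_size(mr, *l, mr_addr);
          val = ldn_he_p(buf, *l);
          result = memory_region_dispatch_write(mr, mr_addr, val,
                                                size_memop(*l), attrs);
          if (release_lock) {
              bql_unlock();
          }
          return result;
      } else {
          /* RAM case: ram_ptr is likewise branch-local */
          uint8_t *ram_ptr = qemu_ram_ptr_length(mr->ram_block, mr_addr, l,
                                                 false);

          memmove(ram_ptr, buf, *l);
          invalidate_and_set_dirty(mr, mr_addr, *l);
          return MEMTX_OK;
      }
  }

The caller's loop would then just advance buf/addr/mr_addr by l and
accumulate the returned MemTxResult, which is why the per-iteration
release_lock reset can go away.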
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Link: https://lore.kernel.org/r/20240307153710.30907-3-Jonathan.Cameron@huawei.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 system/physmem.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/system/physmem.c b/system/physmem.c
index e92bed50a6..e35aa29343 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2688,10 +2688,7 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
                                            hwaddr len, hwaddr mr_addr,
                                            hwaddr l, MemoryRegion *mr)
 {
-    uint8_t *ram_ptr;
-    uint64_t val;
     MemTxResult result = MEMTX_OK;
-    bool release_lock = false;
     const uint8_t *buf = ptr;
 
     for (;;) {
@@ -2699,7 +2696,9 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             result |= MEMTX_ACCESS_ERROR;
             /* Keep going. */
         } else if (!memory_access_is_direct(mr, true)) {
-            release_lock |= prepare_mmio_access(mr);
+            uint64_t val;
+            bool release_lock = prepare_mmio_access(mr);
+
             l = memory_access_size(mr, l, mr_addr);
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
@@ -2717,18 +2716,21 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             val = ldn_he_p(buf, l);
             result |= memory_region_dispatch_write(mr, mr_addr, val,
                                                    size_memop(l), attrs);
+            if (release_lock) {
+                bql_unlock();
+            }
+
+
         } else {
             /* RAM case */
-            ram_ptr = qemu_ram_ptr_length(mr->ram_block, mr_addr, &l, false);
+
+            uint8_t *ram_ptr = qemu_ram_ptr_length(mr->ram_block, mr_addr, &l,
+                                                   false);
+
             memmove(ram_ptr, buf, l);
             invalidate_and_set_dirty(mr, mr_addr, l);
         }
-        if (release_lock) {
-            bql_unlock();
-            release_lock = false;
-        }
-
         len -= l;
         buf += l;
         addr += l;
         mr_addr += l;
@@ -2767,10 +2769,7 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
                                    hwaddr len, hwaddr mr_addr, hwaddr l,
                                    MemoryRegion *mr)
 {
-    uint8_t *ram_ptr;
-    uint64_t val;
     MemTxResult result = MEMTX_OK;
-    bool release_lock = false;
     uint8_t *buf = ptr;
 
     fuzz_dma_read_cb(addr, len, mr);
@@ -2780,7 +2779,9 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* Keep going. */
         } else if (!memory_access_is_direct(mr, false)) {
             /* I/O case */
-            release_lock |= prepare_mmio_access(mr);
+            uint64_t val;
+            bool release_lock = prepare_mmio_access(mr);
+
             l = memory_access_size(mr, l, mr_addr);
             result |= memory_region_dispatch_read(mr, mr_addr, &val,
                                                   size_memop(l), attrs);
@@ -2796,17 +2797,16 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
                    (l == 8 && len >= 8));
 #endif
             stn_he_p(buf, l, val);
+            if (release_lock) {
+                bql_unlock();
+            }
         } else {
             /* RAM case */
-            ram_ptr = qemu_ram_ptr_length(mr->ram_block, mr_addr, &l, false);
+            uint8_t *ram_ptr = qemu_ram_ptr_length(mr->ram_block, mr_addr, &l,
+                                                   false);
             memcpy(buf, ram_ptr, l);
         }
-        if (release_lock) {
-            bql_unlock();
-            release_lock = false;
-        }
-
         len -= l;
         buf += l;
         addr += l;
         mr_addr += l;
--
2.44.0
Thread overview: 36+ messages
2024-03-11 21:58 [PULL 00/34] Migration 20240311 patches peterx
2024-03-11 21:58 ` [PULL 01/34] migration: Don't serialize devices in qemu_savevm_state_iterate() peterx
2024-03-11 21:58 ` [PULL 02/34] vfio/migration: Refactor vfio_save_state() return value peterx
2024-03-11 21:58 ` [PULL 03/34] vfio/migration: Add a note about migration rate limiting peterx
2024-03-11 21:58 ` [PULL 04/34] migration/ram: add additional check peterx
2024-03-11 21:58 ` [PULL 05/34] migration: Report error when shutdown fails peterx
2024-03-11 21:58 ` [PULL 06/34] migration: Remove SaveStateHandler and LoadStateHandler typedefs peterx
2024-03-11 21:58 ` [PULL 07/34] migration: Add documentation for SaveVMHandlers peterx
2024-03-11 21:58 ` [PULL 08/34] migration: Do not call PRECOPY_NOTIFY_SETUP notifiers in case of error peterx
2024-03-11 21:59 ` [PULL 09/34] migration/multifd: Don't fsync when closing QIOChannelFile peterx
2024-03-11 21:59 ` [PULL 10/34] migration/rdma: Fix a memory issue for migration peterx
2024-03-11 21:59 ` [PULL 11/34] physmem: Rename addr1 to more informative mr_addr in flatview_read/write() and similar peterx
2024-03-11 21:59 ` peterx [this message]
2024-03-11 21:59 ` [PULL 13/34] physmem: Factor out body of flatview_read/write_continue() loop peterx
2024-03-11 21:59 ` [PULL 14/34] physmem: Fix wrong address in large address_space_read/write_cached_slow() peterx
2024-03-11 21:59 ` [PULL 15/34] migration: Fix format in error message peterx
2024-03-11 21:59 ` [PULL 16/34] migration: export fewer options peterx
2024-03-11 21:59 ` [PULL 17/34] migration: remove migration.h references peterx
2024-03-11 21:59 ` [PULL 18/34] migration: export migration_is_setup_or_active peterx
2024-03-11 21:59 ` [PULL 19/34] migration: export migration_is_active peterx
2024-03-11 21:59 ` [PULL 20/34] migration: export migration_is_running peterx
2024-03-11 21:59 ` [PULL 21/34] migration: export vcpu_dirty_limit_period peterx
2024-03-11 21:59 ` [PULL 22/34] migration: migration_thread_is_self peterx
2024-03-11 21:59 ` [PULL 23/34] migration: migration_is_device peterx
2024-03-11 21:59 ` [PULL 24/34] migration: migration_file_set_error peterx
2024-03-11 21:59 ` [PULL 25/34] migration: privatize colo interfaces peterx
2024-03-11 21:59 ` [PULL 26/34] migration: delete unused accessors peterx
2024-03-11 21:59 ` [PULL 27/34] migration: purge MigrationState from public interface peterx
2024-03-11 21:59 ` [PULL 28/34] migration/multifd: Allow zero pages in file migration peterx
2024-03-11 21:59 ` [PULL 29/34] migration/multifd: Allow clearing of the file_bmap from multifd peterx
2024-03-11 21:59 ` [PULL 30/34] migration/multifd: Add new migration option zero-page-detection peterx
2024-03-11 21:59 ` [PULL 31/34] migration/multifd: Implement zero page transmission on the multifd thread peterx
2024-03-11 21:59 ` [PULL 32/34] migration/multifd: Implement ram_save_target_page_multifd to handle multifd version of MigrationOps::ram_save_target_page peterx
2024-03-11 21:59 ` [PULL 33/34] migration/multifd: Enable multifd zero page checking by default peterx
2024-03-11 21:59 ` [PULL 34/34] migration/multifd: Add new migration test cases for legacy zero page checking peterx
2024-03-12 13:07 ` [PULL 00/34] Migration 20240311 patches Peter Maydell