From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: v.maffione@gmail.com
Subject: [Qemu-devel] [PATCH 3/8] memory: reorder MemoryRegion fields
Date: Wed, 16 Dec 2015 11:59:55 +0100
Message-ID: <1450263601-2828-4-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1450263601-2828-1-git-send-email-pbonzini@redhat.com>

Order fields so that all fields accessed during a RAM read/write fit in
the same cache line.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/exec/memory.h | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 9bbd247..5b1fd12 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -159,27 +159,32 @@ typedef struct MemoryRegionIoeventfd MemoryRegionIoeventfd;
 
 struct MemoryRegion {
     Object parent_obj;
+
     /* All fields are private - violators will be prosecuted */
-    const MemoryRegionOps *ops;
+
+    /* The following fields should fit in a cache line */
+    bool romd_mode;
+    bool ram;
+    bool subpage;
+    bool readonly; /* For RAM regions */
+    bool rom_device;
+    bool flush_coalesced_mmio;
+    bool global_locking;
+    uint8_t dirty_log_mask;
+    ram_addr_t ram_addr;
     const MemoryRegionIOMMUOps *iommu_ops;
+
+    const MemoryRegionOps *ops;
     void *opaque;
     MemoryRegion *container;
     Int128 size;
     hwaddr addr;
     void (*destructor)(MemoryRegion *mr);
-    ram_addr_t ram_addr;
     uint64_t align;
-    bool subpage;
     bool terminates;
-    bool romd_mode;
-    bool ram;
     bool skip_dump;
-    bool readonly; /* For RAM regions */
     bool enabled;
-    bool rom_device;
     bool warning_printed; /* For reservations */
-    bool flush_coalesced_mmio;
-    bool global_locking;
     uint8_t vga_logging_count;
     MemoryRegion *alias;
     hwaddr alias_offset;
@@ -189,7 +194,6 @@ struct MemoryRegion {
     QTAILQ_ENTRY(MemoryRegion) subregions_link;
     QTAILQ_HEAD(coalesced_ranges, CoalescedMemoryRange) coalesced;
     const char *name;
-    uint8_t dirty_log_mask;
     unsigned ioeventfd_nb;
     MemoryRegionIoeventfd *ioeventfds;
     NotifierList iommu_notify;
--
2.5.0