From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
To: qemu-arm@nongnu.org,
"Philippe Mathieu-Daudé " <philmd@linaro.org>,
qemu-devel@nongnu.org
Cc: "Anthony Perard" <anthony.perard@citrix.com>,
"Paul Durrant" <paul@xen.org>,
"David Woodhouse" <dwmw@amazon.co.uk>,
"Thomas Huth" <thuth@redhat.com>,
qemu-arm@nongnu.org,
"Stefano Stabellini" <sstabellini@kernel.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Alex Benné e" <alex.bennee@linaro.org>,
xen-devel@lists.xenproject.org,
"Philippe Mathieu-Daudé " <philmd@linaro.org>
Subject: Re: [PATCH-for-9.0 9/9] hw/xen/hvm: Inline xen_arch_set_memory()
Date: Thu, 07 Mar 2024 14:11:03 +0200
Message-ID: <9z8lx.2kzq0em3zqbp@linaro.org>
In-Reply-To: <20231114163123.74888-10-philmd@linaro.org>
On Tue, 14 Nov 2023 18:31, Philippe Mathieu-Daudé <philmd@linaro.org> wrote:
>xen_arch_set_memory() is not arch-specific anymore. Being
>called once, inline it in xen_set_memory().
>
>Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
>---
> include/hw/xen/xen-hvm-common.h | 3 -
> hw/xen/xen-hvm-common.c | 104 ++++++++++++++++----------------
> 2 files changed, 51 insertions(+), 56 deletions(-)
>
>diff --git a/include/hw/xen/xen-hvm-common.h b/include/hw/xen/xen-hvm-common.h
>index 536712dc83..a1b8a2783b 100644
>--- a/include/hw/xen/xen-hvm-common.h
>+++ b/include/hw/xen/xen-hvm-common.h
>@@ -99,8 +99,5 @@ void cpu_ioreq_pio(ioreq_t *req);
>
> void xen_read_physmap(XenIOState *state);
> void xen_arch_handle_ioreq(XenIOState *state, ioreq_t *req);
>-void xen_arch_set_memory(XenIOState *state,
>- MemoryRegionSection *section,
>- bool add);
>
> #endif /* HW_XEN_HVM_COMMON_H */
>diff --git a/hw/xen/xen-hvm-common.c b/hw/xen/xen-hvm-common.c
>index 50ce45effc..789c6b4b7a 100644
>--- a/hw/xen/xen-hvm-common.c
>+++ b/hw/xen/xen-hvm-common.c
>@@ -426,50 +426,6 @@ void qmp_xen_set_global_dirty_log(bool enable, Error **errp)
> }
> }
>
>-void xen_arch_set_memory(XenIOState *state, MemoryRegionSection *section,
>- bool add)
>-{
>- unsigned target_page_bits = qemu_target_page_bits();
>- int page_size = qemu_target_page_size();
>- int page_mask = -page_size;
>- hwaddr start_addr = section->offset_within_address_space;
>- ram_addr_t size = int128_get64(section->size);
>- bool log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
>- hvmmem_type_t mem_type;
>-
>- if (!memory_region_is_ram(section->mr)) {
>- return;
>- }
>-
>- if (log_dirty != add) {
>- return;
>- }
>-
>- trace_xen_client_set_memory(start_addr, size, log_dirty);
>-
>- start_addr &= page_mask;
>- size = ROUND_UP(size, page_size);
>-
>- if (add) {
>- if (!memory_region_is_rom(section->mr)) {
>- xen_add_to_physmap(state, start_addr, size,
>- section->mr, section->offset_within_region);
>- } else {
>- mem_type = HVMMEM_ram_ro;
>- if (xen_set_mem_type(xen_domid, mem_type,
>- start_addr >> target_page_bits,
>- size >> target_page_bits)) {
>- DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
>- start_addr);
>- }
>- }
>- } else {
>- if (xen_remove_from_physmap(state, start_addr, size) < 0) {
>- DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
>- }
>- }
>-}
>-
> void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> Error **errp)
> {
>@@ -512,20 +468,62 @@ static void xen_set_memory(struct MemoryListener *listener,
> bool add)
> {
> XenIOState *state = container_of(listener, XenIOState, memory_listener);
>+ unsigned target_page_bits = qemu_target_page_bits();
>+ int page_size = qemu_target_page_size();
>+ int page_mask = -page_size;
>+ hwaddr start_addr;
>+ ram_addr_t size;
>+ bool log_dirty;
>+ hvmmem_type_t mem_type;
>+
>
> if (section->mr == &xen_memory) {
> return;
>- } else {
>- if (add) {
>- xen_map_memory_section(xen_domid, state->ioservid,
>- section);
>- } else {
>- xen_unmap_memory_section(xen_domid, state->ioservid,
>- section);
>- }
> }
>
>- xen_arch_set_memory(state, section, add);
>+ if (add) {
>+ xen_map_memory_section(xen_domid, state->ioservid,
>+ section);
>+ } else {
>+ xen_unmap_memory_section(xen_domid, state->ioservid,
>+ section);
>+ }
>+
>+ if (!memory_region_is_ram(section->mr)) {
>+ return;
>+ }
>+
>+ log_dirty = memory_region_is_logging(section->mr, DIRTY_MEMORY_VGA);
>+
>+ if (log_dirty != add) {
>+ return;
>+ }
>+
>+ start_addr = section->offset_within_address_space;
>+ size = int128_get64(section->size);
>+ trace_xen_client_set_memory(start_addr, size, log_dirty);
>+
>+ start_addr &= page_mask;
>+ size = ROUND_UP(size, page_size);
>+
>+ if (add) {
>+ if (!memory_region_is_rom(section->mr)) {
>+ xen_add_to_physmap(state, start_addr, size,
>+ section->mr, section->offset_within_region);
>+ } else {
>+ mem_type = HVMMEM_ram_ro;
>+ if (xen_set_mem_type(xen_domid, mem_type,
>+ start_addr >> target_page_bits,
>+ size >> target_page_bits)) {
>+ DPRINTF("xen_set_mem_type error, addr: "HWADDR_FMT_plx"\n",
>+ start_addr);
>+ }
>+ }
>+ } else {
>+ if (xen_remove_from_physmap(state, start_addr, size) < 0) {
>+ DPRINTF("physmapping does not exist at "HWADDR_FMT_plx"\n", start_addr);
>+ }
>+ }
> }
>
> void xen_region_add(MemoryListener *listener,
>--
>2.41.0
>
>
Same observation as in the previous patch: on Arm Xen, QEMU doesn't handle
guest memory; it is only responsible for devices and their memory.
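
To illustrate the point (a hypothetical sketch, not code from this series;
the xen_is_arm_domain() guard below is made up for the example): with the
RAM handling inlined into the common listener, the Arm path would
effectively need to stop right after the ioreq server section mapping,
something like:

    static void xen_set_memory(struct MemoryListener *listener,
                               MemoryRegionSection *section, bool add)
    {
        XenIOState *state = container_of(listener, XenIOState,
                                         memory_listener);

        if (section->mr == &xen_memory) {
            return;
        }

        /* Map/unmap the section for device emulation, as in the patch. */
        if (add) {
            xen_map_memory_section(xen_domid, state->ioservid, section);
        } else {
            xen_unmap_memory_section(xen_domid, state->ioservid, section);
        }

        /*
         * Hypothetical guard: on Arm, Xen itself owns guest RAM, so the
         * physmap/HVMMEM_ram_ro handling below would never apply.
         */
        if (xen_is_arm_domain()) {
            return;
        }

        /* ... x86-only physmap and mem-type handling as in the patch ... */
    }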